Test Report: Docker_Linux_crio_arm64 21409

432f5d8b8de395ddce63f21c968df47ae82ccbe6:2025-10-18:41964

Failed tests (45/327)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.57
35 TestAddons/parallel/Registry 15.35
36 TestAddons/parallel/RegistryCreds 0.52
37 TestAddons/parallel/Ingress 145.63
38 TestAddons/parallel/InspektorGadget 6.27
39 TestAddons/parallel/MetricsServer 6.36
41 TestAddons/parallel/CSI 46.89
42 TestAddons/parallel/Headlamp 3.16
43 TestAddons/parallel/CloudSpanner 5.3
44 TestAddons/parallel/LocalPath 8.57
45 TestAddons/parallel/NvidiaDevicePlugin 6.27
46 TestAddons/parallel/Yakd 5.26
98 TestFunctional/parallel/ServiceCmdConnect 603.52
118 TestFunctional/parallel/ImageCommands/ImageListTable 0.29
125 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.15
129 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.89
130 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.2
131 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.32
133 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.2
134 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.37
144 TestFunctional/parallel/ServiceCmd/DeployApp 600.91
153 TestFunctional/parallel/ServiceCmd/HTTPS 0.47
154 TestFunctional/parallel/ServiceCmd/Format 0.5
155 TestFunctional/parallel/ServiceCmd/URL 0.52
174 TestMultiControlPlane/serial/RestartClusterKeepsNodes 504.53
175 TestMultiControlPlane/serial/DeleteSecondaryNode 5.48
176 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 6.43
177 TestMultiControlPlane/serial/StopCluster 13.71
178 TestMultiControlPlane/serial/RestartCluster 177.01
179 TestMultiControlPlane/serial/DegradedAfterClusterRestart 3.92
180 TestMultiControlPlane/serial/AddSecondaryNode 94.16
181 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 4.68
191 TestJSONOutput/pause/Command 1.77
197 TestJSONOutput/unpause/Command 2.21
292 TestPause/serial/Pause 7.79
296 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.47
303 TestStartStop/group/old-k8s-version/serial/Pause 6.5
309 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.57
314 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.8
321 TestStartStop/group/default-k8s-diff-port/serial/Pause 6.65
327 TestStartStop/group/embed-certs/serial/Pause 7.83
332 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.63
333 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 3.16
343 TestStartStop/group/newest-cni/serial/Pause 7.61
348 TestStartStop/group/no-preload/serial/Pause 6.28
TestAddons/serial/Volcano (0.57s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-164474 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-164474 addons disable volcano --alsologtostderr -v=1: exit status 11 (564.206584ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 17:14:59.197362   11020 out.go:360] Setting OutFile to fd 1 ...
	I1018 17:14:59.199754   11020 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 17:14:59.199769   11020 out.go:374] Setting ErrFile to fd 2...
	I1018 17:14:59.199774   11020 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 17:14:59.200084   11020 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-2509/.minikube/bin
	I1018 17:14:59.200440   11020 mustload.go:65] Loading cluster: addons-164474
	I1018 17:14:59.200844   11020 config.go:182] Loaded profile config "addons-164474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:14:59.200864   11020 addons.go:606] checking whether the cluster is paused
	I1018 17:14:59.201159   11020 config.go:182] Loaded profile config "addons-164474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:14:59.201180   11020 host.go:66] Checking if "addons-164474" exists ...
	I1018 17:14:59.201688   11020 cli_runner.go:164] Run: docker container inspect addons-164474 --format={{.State.Status}}
	I1018 17:14:59.236259   11020 ssh_runner.go:195] Run: systemctl --version
	I1018 17:14:59.236322   11020 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164474
	I1018 17:14:59.253244   11020 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/addons-164474/id_rsa Username:docker}
	I1018 17:14:59.355660   11020 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 17:14:59.355805   11020 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 17:14:59.384105   11020 cri.go:89] found id: "968c95a146a7fe08d5189eee29bd9582f2894b5d0f04e78e794058e86e194f17"
	I1018 17:14:59.384135   11020 cri.go:89] found id: "7657f768a8a9ac41bcd1f5e7a196579a7dcf31f08b605bba0bd11acb46369892"
	I1018 17:14:59.384140   11020 cri.go:89] found id: "a4777ba56bbe130ce2d0759f981f7a5a7a81a6f76b26c9602759d75786f28075"
	I1018 17:14:59.384144   11020 cri.go:89] found id: "cdf72845ca4f04b7f38a96e8e2bc2c5bff55db097097fe86438572754061e4d1"
	I1018 17:14:59.384147   11020 cri.go:89] found id: "cbf41849e12c028d15eee86acc3c0fcaf5d31af35d656b7935de4a45730fb182"
	I1018 17:14:59.384152   11020 cri.go:89] found id: "cd1c762de0b5dd26a00d004eb60c3a0356920d2d898bf210120e83239de379d3"
	I1018 17:14:59.384155   11020 cri.go:89] found id: "f97b941babec4dfdf104ffdbe7459e396a64a17a6edfa11989d9170c5b5365e2"
	I1018 17:14:59.384159   11020 cri.go:89] found id: "c763b99ed4a70e785446e888023cdfabc0fdeb6e7dcb1a84844d98d22b841291"
	I1018 17:14:59.384162   11020 cri.go:89] found id: "26297b4bb562054967554961013b4aecf4a819a64b9615266425ddb33797d349"
	I1018 17:14:59.384171   11020 cri.go:89] found id: "14f2f76f82dc964b8b157e088100913e80feaa2be642ecc8b72fea78bd2a0ed1"
	I1018 17:14:59.384179   11020 cri.go:89] found id: "901f9bb2898fac636a6903ad516f9b140591198721e4e2bfd30c9ab9155a01ed"
	I1018 17:14:59.384182   11020 cri.go:89] found id: "8a59f8ac6ef2822e7088c9cd1a68272c147739f96eaf27abf4a85d43c140b0ea"
	I1018 17:14:59.384185   11020 cri.go:89] found id: "ce684ca523f08f4af3d1134e239085b099cc9e2cd0f8679963ba4f111fcf7567"
	I1018 17:14:59.384188   11020 cri.go:89] found id: "f402fe3063f55e7003a2aaac453c55c6b2139f8fa75d1a062b447a4a5a8f278c"
	I1018 17:14:59.384192   11020 cri.go:89] found id: "6865806b912ab6d902d766fb60959288c01cc7c01f0f6d41ece13a1484e43f45"
	I1018 17:14:59.384200   11020 cri.go:89] found id: "8d07fa8a1c45fb2b7f3f20b332023c1b057391cd1e4435eb47db001464e9ada7"
	I1018 17:14:59.384208   11020 cri.go:89] found id: "ece6fd8e36b7414b9ea8a96fa9d85543498f89e17705fa3bc262b1570f482b24"
	I1018 17:14:59.384212   11020 cri.go:89] found id: "d12b84a60111629c5268442b96bd59e440c9aec3f86f326d9528b07daa476596"
	I1018 17:14:59.384216   11020 cri.go:89] found id: "d87115dc1b972147e18ebd00d21f7d791e5831c69fbef5f5e25fb2fade668bf7"
	I1018 17:14:59.384219   11020 cri.go:89] found id: "07f016f168b62771c5ab60ab8215041fcead58a20ef1da5932bcb8d6da58077f"
	I1018 17:14:59.384224   11020 cri.go:89] found id: "f085bccd65219cd8bb8d59ffcc8bee71589bead44d17e3e6fe5269fe6781f2f3"
	I1018 17:14:59.384227   11020 cri.go:89] found id: "4a1b92f8cd14a17c1e2790e1ca03a5608e43fb0ee84dba04aae2757215b8f043"
	I1018 17:14:59.384230   11020 cri.go:89] found id: "246aa3ddddf57033502d5fd5679ade1ae4e79cefdfdc7645841ea4f17a3e0313"
	I1018 17:14:59.384233   11020 cri.go:89] found id: ""
	I1018 17:14:59.384286   11020 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 17:14:59.398467   11020 out.go:203] 
	W1018 17:14:59.401326   11020 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T17:14:59Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T17:14:59Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 17:14:59.401350   11020 out.go:285] * 
	* 
	W1018 17:14:59.676625   11020 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 17:14:59.679642   11020 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-164474 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.57s)

TestAddons/parallel/Registry (15.35s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 9.42551ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-fwkz8" [d12f3e97-a0a1-4ac6-aa88-1e38730ecf05] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003228958s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-6x6dm" [7cc511f1-c2a0-4516-85b6-eee6876bc7ae] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003592504s
addons_test.go:392: (dbg) Run:  kubectl --context addons-164474 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-164474 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-164474 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.630607401s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-164474 ip
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-164474 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-164474 addons disable registry --alsologtostderr -v=1: exit status 11 (362.207693ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 17:15:24.468698   12014 out.go:360] Setting OutFile to fd 1 ...
	I1018 17:15:24.471537   12014 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 17:15:24.471579   12014 out.go:374] Setting ErrFile to fd 2...
	I1018 17:15:24.471610   12014 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 17:15:24.472010   12014 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-2509/.minikube/bin
	I1018 17:15:24.472457   12014 mustload.go:65] Loading cluster: addons-164474
	I1018 17:15:24.473254   12014 config.go:182] Loaded profile config "addons-164474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:15:24.473278   12014 addons.go:606] checking whether the cluster is paused
	I1018 17:15:24.473458   12014 config.go:182] Loaded profile config "addons-164474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:15:24.473480   12014 host.go:66] Checking if "addons-164474" exists ...
	I1018 17:15:24.474151   12014 cli_runner.go:164] Run: docker container inspect addons-164474 --format={{.State.Status}}
	I1018 17:15:24.511701   12014 ssh_runner.go:195] Run: systemctl --version
	I1018 17:15:24.511752   12014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164474
	I1018 17:15:24.550263   12014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/addons-164474/id_rsa Username:docker}
	I1018 17:15:24.661739   12014 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 17:15:24.661826   12014 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 17:15:24.734838   12014 cri.go:89] found id: "968c95a146a7fe08d5189eee29bd9582f2894b5d0f04e78e794058e86e194f17"
	I1018 17:15:24.734858   12014 cri.go:89] found id: "7657f768a8a9ac41bcd1f5e7a196579a7dcf31f08b605bba0bd11acb46369892"
	I1018 17:15:24.734862   12014 cri.go:89] found id: "a4777ba56bbe130ce2d0759f981f7a5a7a81a6f76b26c9602759d75786f28075"
	I1018 17:15:24.734866   12014 cri.go:89] found id: "cdf72845ca4f04b7f38a96e8e2bc2c5bff55db097097fe86438572754061e4d1"
	I1018 17:15:24.734869   12014 cri.go:89] found id: "cbf41849e12c028d15eee86acc3c0fcaf5d31af35d656b7935de4a45730fb182"
	I1018 17:15:24.734873   12014 cri.go:89] found id: "cd1c762de0b5dd26a00d004eb60c3a0356920d2d898bf210120e83239de379d3"
	I1018 17:15:24.734876   12014 cri.go:89] found id: "f97b941babec4dfdf104ffdbe7459e396a64a17a6edfa11989d9170c5b5365e2"
	I1018 17:15:24.734879   12014 cri.go:89] found id: "c763b99ed4a70e785446e888023cdfabc0fdeb6e7dcb1a84844d98d22b841291"
	I1018 17:15:24.734882   12014 cri.go:89] found id: "26297b4bb562054967554961013b4aecf4a819a64b9615266425ddb33797d349"
	I1018 17:15:24.734888   12014 cri.go:89] found id: "14f2f76f82dc964b8b157e088100913e80feaa2be642ecc8b72fea78bd2a0ed1"
	I1018 17:15:24.734892   12014 cri.go:89] found id: "901f9bb2898fac636a6903ad516f9b140591198721e4e2bfd30c9ab9155a01ed"
	I1018 17:15:24.734895   12014 cri.go:89] found id: "8a59f8ac6ef2822e7088c9cd1a68272c147739f96eaf27abf4a85d43c140b0ea"
	I1018 17:15:24.734903   12014 cri.go:89] found id: "ce684ca523f08f4af3d1134e239085b099cc9e2cd0f8679963ba4f111fcf7567"
	I1018 17:15:24.734906   12014 cri.go:89] found id: "f402fe3063f55e7003a2aaac453c55c6b2139f8fa75d1a062b447a4a5a8f278c"
	I1018 17:15:24.734909   12014 cri.go:89] found id: "6865806b912ab6d902d766fb60959288c01cc7c01f0f6d41ece13a1484e43f45"
	I1018 17:15:24.734915   12014 cri.go:89] found id: "8d07fa8a1c45fb2b7f3f20b332023c1b057391cd1e4435eb47db001464e9ada7"
	I1018 17:15:24.734918   12014 cri.go:89] found id: "ece6fd8e36b7414b9ea8a96fa9d85543498f89e17705fa3bc262b1570f482b24"
	I1018 17:15:24.734922   12014 cri.go:89] found id: "d12b84a60111629c5268442b96bd59e440c9aec3f86f326d9528b07daa476596"
	I1018 17:15:24.734925   12014 cri.go:89] found id: "d87115dc1b972147e18ebd00d21f7d791e5831c69fbef5f5e25fb2fade668bf7"
	I1018 17:15:24.734928   12014 cri.go:89] found id: "07f016f168b62771c5ab60ab8215041fcead58a20ef1da5932bcb8d6da58077f"
	I1018 17:15:24.734933   12014 cri.go:89] found id: "f085bccd65219cd8bb8d59ffcc8bee71589bead44d17e3e6fe5269fe6781f2f3"
	I1018 17:15:24.734936   12014 cri.go:89] found id: "4a1b92f8cd14a17c1e2790e1ca03a5608e43fb0ee84dba04aae2757215b8f043"
	I1018 17:15:24.734939   12014 cri.go:89] found id: "246aa3ddddf57033502d5fd5679ade1ae4e79cefdfdc7645841ea4f17a3e0313"
	I1018 17:15:24.734941   12014 cri.go:89] found id: ""
	I1018 17:15:24.734993   12014 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 17:15:24.752646   12014 out.go:203] 
	W1018 17:15:24.755677   12014 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T17:15:24Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T17:15:24Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 17:15:24.755719   12014 out.go:285] * 
	* 
	W1018 17:15:24.760043   12014 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 17:15:24.763061   12014 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-164474 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (15.35s)

TestAddons/parallel/RegistryCreds (0.52s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 5.94538ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-164474
addons_test.go:332: (dbg) Run:  kubectl --context addons-164474 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-164474 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-164474 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (273.897834ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 17:16:26.129562   13617 out.go:360] Setting OutFile to fd 1 ...
	I1018 17:16:26.129816   13617 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 17:16:26.129830   13617 out.go:374] Setting ErrFile to fd 2...
	I1018 17:16:26.129837   13617 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 17:16:26.130613   13617 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-2509/.minikube/bin
	I1018 17:16:26.130965   13617 mustload.go:65] Loading cluster: addons-164474
	I1018 17:16:26.131457   13617 config.go:182] Loaded profile config "addons-164474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:16:26.131517   13617 addons.go:606] checking whether the cluster is paused
	I1018 17:16:26.131677   13617 config.go:182] Loaded profile config "addons-164474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:16:26.131735   13617 host.go:66] Checking if "addons-164474" exists ...
	I1018 17:16:26.132253   13617 cli_runner.go:164] Run: docker container inspect addons-164474 --format={{.State.Status}}
	I1018 17:16:26.162070   13617 ssh_runner.go:195] Run: systemctl --version
	I1018 17:16:26.162119   13617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164474
	I1018 17:16:26.188343   13617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/addons-164474/id_rsa Username:docker}
	I1018 17:16:26.291299   13617 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 17:16:26.291381   13617 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 17:16:26.324446   13617 cri.go:89] found id: "968c95a146a7fe08d5189eee29bd9582f2894b5d0f04e78e794058e86e194f17"
	I1018 17:16:26.324464   13617 cri.go:89] found id: "7657f768a8a9ac41bcd1f5e7a196579a7dcf31f08b605bba0bd11acb46369892"
	I1018 17:16:26.324468   13617 cri.go:89] found id: "a4777ba56bbe130ce2d0759f981f7a5a7a81a6f76b26c9602759d75786f28075"
	I1018 17:16:26.324472   13617 cri.go:89] found id: "cdf72845ca4f04b7f38a96e8e2bc2c5bff55db097097fe86438572754061e4d1"
	I1018 17:16:26.324475   13617 cri.go:89] found id: "cbf41849e12c028d15eee86acc3c0fcaf5d31af35d656b7935de4a45730fb182"
	I1018 17:16:26.324479   13617 cri.go:89] found id: "cd1c762de0b5dd26a00d004eb60c3a0356920d2d898bf210120e83239de379d3"
	I1018 17:16:26.324482   13617 cri.go:89] found id: "f97b941babec4dfdf104ffdbe7459e396a64a17a6edfa11989d9170c5b5365e2"
	I1018 17:16:26.324485   13617 cri.go:89] found id: "c763b99ed4a70e785446e888023cdfabc0fdeb6e7dcb1a84844d98d22b841291"
	I1018 17:16:26.324488   13617 cri.go:89] found id: "26297b4bb562054967554961013b4aecf4a819a64b9615266425ddb33797d349"
	I1018 17:16:26.324494   13617 cri.go:89] found id: "14f2f76f82dc964b8b157e088100913e80feaa2be642ecc8b72fea78bd2a0ed1"
	I1018 17:16:26.324498   13617 cri.go:89] found id: "901f9bb2898fac636a6903ad516f9b140591198721e4e2bfd30c9ab9155a01ed"
	I1018 17:16:26.324501   13617 cri.go:89] found id: "8a59f8ac6ef2822e7088c9cd1a68272c147739f96eaf27abf4a85d43c140b0ea"
	I1018 17:16:26.324505   13617 cri.go:89] found id: "ce684ca523f08f4af3d1134e239085b099cc9e2cd0f8679963ba4f111fcf7567"
	I1018 17:16:26.324508   13617 cri.go:89] found id: "f402fe3063f55e7003a2aaac453c55c6b2139f8fa75d1a062b447a4a5a8f278c"
	I1018 17:16:26.324511   13617 cri.go:89] found id: "6865806b912ab6d902d766fb60959288c01cc7c01f0f6d41ece13a1484e43f45"
	I1018 17:16:26.324516   13617 cri.go:89] found id: "8d07fa8a1c45fb2b7f3f20b332023c1b057391cd1e4435eb47db001464e9ada7"
	I1018 17:16:26.324519   13617 cri.go:89] found id: "ece6fd8e36b7414b9ea8a96fa9d85543498f89e17705fa3bc262b1570f482b24"
	I1018 17:16:26.324525   13617 cri.go:89] found id: "d12b84a60111629c5268442b96bd59e440c9aec3f86f326d9528b07daa476596"
	I1018 17:16:26.324533   13617 cri.go:89] found id: "d87115dc1b972147e18ebd00d21f7d791e5831c69fbef5f5e25fb2fade668bf7"
	I1018 17:16:26.324536   13617 cri.go:89] found id: "07f016f168b62771c5ab60ab8215041fcead58a20ef1da5932bcb8d6da58077f"
	I1018 17:16:26.324541   13617 cri.go:89] found id: "f085bccd65219cd8bb8d59ffcc8bee71589bead44d17e3e6fe5269fe6781f2f3"
	I1018 17:16:26.324544   13617 cri.go:89] found id: "4a1b92f8cd14a17c1e2790e1ca03a5608e43fb0ee84dba04aae2757215b8f043"
	I1018 17:16:26.324547   13617 cri.go:89] found id: "246aa3ddddf57033502d5fd5679ade1ae4e79cefdfdc7645841ea4f17a3e0313"
	I1018 17:16:26.324550   13617 cri.go:89] found id: ""
	I1018 17:16:26.324605   13617 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 17:16:26.339611   13617 out.go:203] 
	W1018 17:16:26.342514   13617 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T17:16:26Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T17:16:26Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 17:16:26.342539   13617 out.go:285] * 
	* 
	W1018 17:16:26.346885   13617 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 17:16:26.349857   13617 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-164474 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.52s)

TestAddons/parallel/Ingress (145.63s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-164474 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-164474 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-164474 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [ac8a9086-12c6-417c-98f6-4a177a8b17de] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [ac8a9086-12c6-417c-98f6-4a177a8b17de] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003564139s
I1018 17:15:46.167350    4320 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-164474 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-164474 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.890972659s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-164474 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-164474 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-164474
helpers_test.go:243: (dbg) docker inspect addons-164474:

-- stdout --
	[
	    {
	        "Id": "31000ccc16f2da54474476b9a5eeb51132587beec766c8579e875c01b1c476ea",
	        "Created": "2025-10-18T17:12:36.608114275Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 5474,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T17:12:36.693681146Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/31000ccc16f2da54474476b9a5eeb51132587beec766c8579e875c01b1c476ea/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/31000ccc16f2da54474476b9a5eeb51132587beec766c8579e875c01b1c476ea/hostname",
	        "HostsPath": "/var/lib/docker/containers/31000ccc16f2da54474476b9a5eeb51132587beec766c8579e875c01b1c476ea/hosts",
	        "LogPath": "/var/lib/docker/containers/31000ccc16f2da54474476b9a5eeb51132587beec766c8579e875c01b1c476ea/31000ccc16f2da54474476b9a5eeb51132587beec766c8579e875c01b1c476ea-json.log",
	        "Name": "/addons-164474",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-164474:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-164474",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "31000ccc16f2da54474476b9a5eeb51132587beec766c8579e875c01b1c476ea",
	                "LowerDir": "/var/lib/docker/overlay2/60c2b458f4fb11ddd0cefd6c98eefc86dd6f597e5e6af5b4ba683fc484a932fd-init/diff:/var/lib/docker/overlay2/584ab177b02ad2db5330471b7171ad39934c457d8615b9ee4939a04b59f78474/diff",
	                "MergedDir": "/var/lib/docker/overlay2/60c2b458f4fb11ddd0cefd6c98eefc86dd6f597e5e6af5b4ba683fc484a932fd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/60c2b458f4fb11ddd0cefd6c98eefc86dd6f597e5e6af5b4ba683fc484a932fd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/60c2b458f4fb11ddd0cefd6c98eefc86dd6f597e5e6af5b4ba683fc484a932fd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-164474",
	                "Source": "/var/lib/docker/volumes/addons-164474/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-164474",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-164474",
	                "name.minikube.sigs.k8s.io": "addons-164474",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2f9bac4ff314be3da3d5ff3000a087f1d269302b36a1df4ea82d00b0e76dae49",
	            "SandboxKey": "/var/run/docker/netns/2f9bac4ff314",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-164474": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "52:f8:ad:9e:06:ff",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "281458d9014b24585c5cceab2454c34e1b72788eb05df25f412bc3f15189db83",
	                    "EndpointID": "da4d366b9ded281de67a1839eb602323deefe10fd96fccfdb22aeeb48db46628",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-164474",
	                        "31000ccc16f2"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-164474 -n addons-164474
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-164474 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-164474 logs -n 25: (2.172679598s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-docker-146837                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-146837 │ jenkins │ v1.37.0 │ 18 Oct 25 17:12 UTC │ 18 Oct 25 17:12 UTC │
	│ start   │ --download-only -p binary-mirror-644672 --alsologtostderr --binary-mirror http://127.0.0.1:44133 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-644672   │ jenkins │ v1.37.0 │ 18 Oct 25 17:12 UTC │                     │
	│ delete  │ -p binary-mirror-644672                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-644672   │ jenkins │ v1.37.0 │ 18 Oct 25 17:12 UTC │ 18 Oct 25 17:12 UTC │
	│ addons  │ enable dashboard -p addons-164474                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-164474          │ jenkins │ v1.37.0 │ 18 Oct 25 17:12 UTC │                     │
	│ addons  │ disable dashboard -p addons-164474                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-164474          │ jenkins │ v1.37.0 │ 18 Oct 25 17:12 UTC │                     │
	│ start   │ -p addons-164474 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-164474          │ jenkins │ v1.37.0 │ 18 Oct 25 17:12 UTC │ 18 Oct 25 17:14 UTC │
	│ addons  │ addons-164474 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-164474          │ jenkins │ v1.37.0 │ 18 Oct 25 17:14 UTC │                     │
	│ addons  │ addons-164474 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-164474          │ jenkins │ v1.37.0 │ 18 Oct 25 17:15 UTC │                     │
	│ addons  │ enable headlamp -p addons-164474 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-164474          │ jenkins │ v1.37.0 │ 18 Oct 25 17:15 UTC │                     │
	│ addons  │ addons-164474 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-164474          │ jenkins │ v1.37.0 │ 18 Oct 25 17:15 UTC │                     │
	│ addons  │ addons-164474 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-164474          │ jenkins │ v1.37.0 │ 18 Oct 25 17:15 UTC │                     │
	│ addons  │ addons-164474 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-164474          │ jenkins │ v1.37.0 │ 18 Oct 25 17:15 UTC │                     │
	│ ip      │ addons-164474 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-164474          │ jenkins │ v1.37.0 │ 18 Oct 25 17:15 UTC │ 18 Oct 25 17:15 UTC │
	│ addons  │ addons-164474 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-164474          │ jenkins │ v1.37.0 │ 18 Oct 25 17:15 UTC │                     │
	│ addons  │ addons-164474 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-164474          │ jenkins │ v1.37.0 │ 18 Oct 25 17:15 UTC │                     │
	│ ssh     │ addons-164474 ssh cat /opt/local-path-provisioner/pvc-ae21b147-3096-4e56-ade2-459d3d01d96a_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-164474          │ jenkins │ v1.37.0 │ 18 Oct 25 17:15 UTC │ 18 Oct 25 17:15 UTC │
	│ addons  │ addons-164474 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-164474          │ jenkins │ v1.37.0 │ 18 Oct 25 17:15 UTC │                     │
	│ addons  │ addons-164474 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-164474          │ jenkins │ v1.37.0 │ 18 Oct 25 17:15 UTC │                     │
	│ ssh     │ addons-164474 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-164474          │ jenkins │ v1.37.0 │ 18 Oct 25 17:15 UTC │                     │
	│ addons  │ addons-164474 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-164474          │ jenkins │ v1.37.0 │ 18 Oct 25 17:16 UTC │                     │
	│ addons  │ addons-164474 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-164474          │ jenkins │ v1.37.0 │ 18 Oct 25 17:16 UTC │                     │
	│ addons  │ addons-164474 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-164474          │ jenkins │ v1.37.0 │ 18 Oct 25 17:16 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-164474                                                                                                                                                                                                                                                                                                                                                                                           │ addons-164474          │ jenkins │ v1.37.0 │ 18 Oct 25 17:16 UTC │ 18 Oct 25 17:16 UTC │
	│ addons  │ addons-164474 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-164474          │ jenkins │ v1.37.0 │ 18 Oct 25 17:16 UTC │                     │
	│ ip      │ addons-164474 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-164474          │ jenkins │ v1.37.0 │ 18 Oct 25 17:17 UTC │ 18 Oct 25 17:17 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 17:12:09
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 17:12:09.621667    5077 out.go:360] Setting OutFile to fd 1 ...
	I1018 17:12:09.621843    5077 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 17:12:09.621872    5077 out.go:374] Setting ErrFile to fd 2...
	I1018 17:12:09.621894    5077 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 17:12:09.622181    5077 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-2509/.minikube/bin
	I1018 17:12:09.622687    5077 out.go:368] Setting JSON to false
	I1018 17:12:09.623453    5077 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3279,"bootTime":1760804251,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1018 17:12:09.623547    5077 start.go:141] virtualization:  
	I1018 17:12:09.627238    5077 out.go:179] * [addons-164474] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 17:12:09.630379    5077 out.go:179]   - MINIKUBE_LOCATION=21409
	I1018 17:12:09.630448    5077 notify.go:220] Checking for updates...
	I1018 17:12:09.636244    5077 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 17:12:09.639243    5077 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-2509/kubeconfig
	I1018 17:12:09.642208    5077 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-2509/.minikube
	I1018 17:12:09.645082    5077 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 17:12:09.648029    5077 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 17:12:09.651192    5077 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 17:12:09.683084    5077 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 17:12:09.683211    5077 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 17:12:09.744501    5077 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:49 SystemTime:2025-10-18 17:12:09.734318466 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 17:12:09.744612    5077 docker.go:318] overlay module found
	I1018 17:12:09.747678    5077 out.go:179] * Using the docker driver based on user configuration
	I1018 17:12:09.750516    5077 start.go:305] selected driver: docker
	I1018 17:12:09.750539    5077 start.go:925] validating driver "docker" against <nil>
	I1018 17:12:09.750553    5077 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 17:12:09.751286    5077 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 17:12:09.806318    5077 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:49 SystemTime:2025-10-18 17:12:09.797250313 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 17:12:09.806473    5077 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 17:12:09.806696    5077 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 17:12:09.809565    5077 out.go:179] * Using Docker driver with root privileges
	I1018 17:12:09.812466    5077 cni.go:84] Creating CNI manager for ""
	I1018 17:12:09.812540    5077 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 17:12:09.812554    5077 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 17:12:09.812633    5077 start.go:349] cluster config:
	{Name:addons-164474 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-164474 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1018 17:12:09.817507    5077 out.go:179] * Starting "addons-164474" primary control-plane node in "addons-164474" cluster
	I1018 17:12:09.820411    5077 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 17:12:09.823474    5077 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 17:12:09.826285    5077 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 17:12:09.826341    5077 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1018 17:12:09.826351    5077 cache.go:58] Caching tarball of preloaded images
	I1018 17:12:09.826440    5077 preload.go:233] Found /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 17:12:09.826450    5077 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 17:12:09.826794    5077 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/config.json ...
	I1018 17:12:09.826815    5077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/config.json: {Name:mk3348a25a1467de46c94788d07a2cffa213827d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:12:09.826971    5077 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 17:12:09.843152    5077 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 to local cache
	I1018 17:12:09.843268    5077 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory
	I1018 17:12:09.843286    5077 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory, skipping pull
	I1018 17:12:09.843291    5077 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in cache, skipping pull
	I1018 17:12:09.843298    5077 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 as a tarball
	I1018 17:12:09.843302    5077 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 from local cache
	I1018 17:12:27.269089    5077 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 from cached tarball
	I1018 17:12:27.269127    5077 cache.go:232] Successfully downloaded all kic artifacts
	I1018 17:12:27.269155    5077 start.go:360] acquireMachinesLock for addons-164474: {Name:mkab7365bdd9150f769d9384f833a7496379677e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 17:12:27.269263    5077 start.go:364] duration metric: took 89.675µs to acquireMachinesLock for "addons-164474"
	I1018 17:12:27.269294    5077 start.go:93] Provisioning new machine with config: &{Name:addons-164474 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-164474 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 17:12:27.269387    5077 start.go:125] createHost starting for "" (driver="docker")
	I1018 17:12:27.272894    5077 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1018 17:12:27.273133    5077 start.go:159] libmachine.API.Create for "addons-164474" (driver="docker")
	I1018 17:12:27.273182    5077 client.go:168] LocalClient.Create starting
	I1018 17:12:27.273305    5077 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem
	I1018 17:12:28.988090    5077 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem
	I1018 17:12:29.713605    5077 cli_runner.go:164] Run: docker network inspect addons-164474 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 17:12:29.729251    5077 cli_runner.go:211] docker network inspect addons-164474 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 17:12:29.729339    5077 network_create.go:284] running [docker network inspect addons-164474] to gather additional debugging logs...
	I1018 17:12:29.729359    5077 cli_runner.go:164] Run: docker network inspect addons-164474
	W1018 17:12:29.744927    5077 cli_runner.go:211] docker network inspect addons-164474 returned with exit code 1
	I1018 17:12:29.745021    5077 network_create.go:287] error running [docker network inspect addons-164474]: docker network inspect addons-164474: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-164474 not found
	I1018 17:12:29.745035    5077 network_create.go:289] output of [docker network inspect addons-164474]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-164474 not found
	
	** /stderr **
	I1018 17:12:29.745138    5077 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 17:12:29.761282    5077 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001997fe0}
	I1018 17:12:29.761334    5077 network_create.go:124] attempt to create docker network addons-164474 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1018 17:12:29.761391    5077 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-164474 addons-164474
	I1018 17:12:29.820225    5077 network_create.go:108] docker network addons-164474 192.168.49.0/24 created
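For illustration only, a minimal Go sketch (not minikube's implementation) of the network-creation step logged above: it shells out to the Docker CLI with the same flags, subnet and labels that appear in the log. The function name createNetwork is invented for this example.

package main

import (
	"fmt"
	"os/exec"
)

// createNetwork creates a dedicated bridge network for a profile, mirroring
// the "docker network create" invocation in the log above.
func createNetwork(name, subnet, gateway string, mtu int) error {
	args := []string{
		"network", "create",
		"--driver=bridge",
		"--subnet=" + subnet,
		"--gateway=" + gateway,
		"-o", "--ip-masq",
		"-o", "--icc",
		"-o", fmt.Sprintf("com.docker.network.driver.mtu=%d", mtu),
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io=" + name,
		name,
	}
	out, err := exec.Command("docker", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("docker network create: %v: %s", err, out)
	}
	return nil
}

func main() {
	// Values taken from the log: profile name and the free subnet minikube picked.
	if err := createNetwork("addons-164474", "192.168.49.0/24", "192.168.49.1", 1500); err != nil {
		fmt.Println(err)
	}
}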
	I1018 17:12:29.820252    5077 kic.go:121] calculated static IP "192.168.49.2" for the "addons-164474" container
	I1018 17:12:29.820325    5077 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 17:12:29.835209    5077 cli_runner.go:164] Run: docker volume create addons-164474 --label name.minikube.sigs.k8s.io=addons-164474 --label created_by.minikube.sigs.k8s.io=true
	I1018 17:12:29.853345    5077 oci.go:103] Successfully created a docker volume addons-164474
	I1018 17:12:29.853436    5077 cli_runner.go:164] Run: docker run --rm --name addons-164474-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-164474 --entrypoint /usr/bin/test -v addons-164474:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 17:12:32.115175    5077 cli_runner.go:217] Completed: docker run --rm --name addons-164474-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-164474 --entrypoint /usr/bin/test -v addons-164474:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib: (2.261703898s)
	I1018 17:12:32.115222    5077 oci.go:107] Successfully prepared a docker volume addons-164474
	I1018 17:12:32.115245    5077 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 17:12:32.115263    5077 kic.go:194] Starting extracting preloaded images to volume ...
	I1018 17:12:32.115328    5077 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-164474:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1018 17:12:36.541425    5077 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-164474:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.42605261s)
	I1018 17:12:36.541456    5077 kic.go:203] duration metric: took 4.426190507s to extract preloaded images to volume ...
	W1018 17:12:36.541601    5077 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1018 17:12:36.541715    5077 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 17:12:36.593589    5077 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-164474 --name addons-164474 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-164474 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-164474 --network addons-164474 --ip 192.168.49.2 --volume addons-164474:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 17:12:36.937166    5077 cli_runner.go:164] Run: docker container inspect addons-164474 --format={{.State.Running}}
	I1018 17:12:36.961301    5077 cli_runner.go:164] Run: docker container inspect addons-164474 --format={{.State.Status}}
	I1018 17:12:36.985066    5077 cli_runner.go:164] Run: docker exec addons-164474 stat /var/lib/dpkg/alternatives/iptables
	I1018 17:12:37.040440    5077 oci.go:144] the created container "addons-164474" has a running status.
	I1018 17:12:37.040471    5077 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/addons-164474/id_rsa...
	I1018 17:12:38.038559    5077 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21409-2509/.minikube/machines/addons-164474/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 17:12:38.070664    5077 cli_runner.go:164] Run: docker container inspect addons-164474 --format={{.State.Status}}
	I1018 17:12:38.090555    5077 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 17:12:38.090581    5077 kic_runner.go:114] Args: [docker exec --privileged addons-164474 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 17:12:38.132101    5077 cli_runner.go:164] Run: docker container inspect addons-164474 --format={{.State.Status}}
	I1018 17:12:38.149135    5077 machine.go:93] provisionDockerMachine start ...
	I1018 17:12:38.149238    5077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164474
	I1018 17:12:38.166517    5077 main.go:141] libmachine: Using SSH client type: native
	I1018 17:12:38.166841    5077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1018 17:12:38.166856    5077 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 17:12:38.167524    5077 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1018 17:12:41.316424    5077 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-164474
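The log shows the first SSH handshake to the published port failing with EOF and a later attempt succeeding, which is the usual "wait for the container's sshd to come up" pattern. A hedged Go sketch of that pattern follows; it uses "docker port" to find the mapped host port for 22/tcp and polls it over TCP. Function names (sshHostPort, waitForSSH) are invented for the example and this is not minikube's code.

package main

import (
	"fmt"
	"net"
	"os/exec"
	"strings"
	"time"
)

// sshHostPort asks Docker which host port was published for the container's port 22.
func sshHostPort(container string) (string, error) {
	out, err := exec.Command("docker", "port", container, "22/tcp").Output()
	if err != nil {
		return "", err
	}
	// Typical first line of output: "0.0.0.0:32768"
	line := strings.TrimSpace(strings.Split(string(out), "\n")[0])
	idx := strings.LastIndex(line, ":")
	if idx < 0 {
		return "", fmt.Errorf("unexpected docker port output: %q", line)
	}
	return line[idx+1:], nil
}

// waitForSSH polls the mapped port until a TCP connection succeeds or the timeout passes.
func waitForSSH(port string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", "127.0.0.1:"+port, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("port %s not reachable within %s", port, timeout)
}

func main() {
	port, err := sshHostPort("addons-164474")
	if err == nil {
		err = waitForSSH(port, 60*time.Second)
	}
	fmt.Println(port, err)
}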
	
	I1018 17:12:41.316447    5077 ubuntu.go:182] provisioning hostname "addons-164474"
	I1018 17:12:41.316511    5077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164474
	I1018 17:12:41.333766    5077 main.go:141] libmachine: Using SSH client type: native
	I1018 17:12:41.334068    5077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1018 17:12:41.334085    5077 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-164474 && echo "addons-164474" | sudo tee /etc/hostname
	I1018 17:12:41.489500    5077 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-164474
	
	I1018 17:12:41.489571    5077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164474
	I1018 17:12:41.507243    5077 main.go:141] libmachine: Using SSH client type: native
	I1018 17:12:41.507543    5077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1018 17:12:41.507558    5077 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-164474' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-164474/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-164474' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 17:12:41.652846    5077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
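The shell snippet above makes the /etc/hosts edit idempotent: it only rewrites or appends a 127.0.1.1 entry when the hostname is not already present. A simplified Go sketch of the same logic is shown below; ensureHostname is an invented name, the whole-line grep match is approximated with a substring check, and error handling is minimal.

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostname adds or rewrites a 127.0.1.1 entry for hostname in the given hosts file.
func ensureHostname(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	for _, l := range lines {
		if strings.Contains(l, hostname) {
			return nil // already present, nothing to do
		}
	}
	replaced := false
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + hostname
			replaced = true
			break
		}
	}
	if !replaced {
		lines = append(lines, "127.0.1.1 "+hostname)
	}
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
}

func main() {
	fmt.Println(ensureHostname("/etc/hosts", "addons-164474"))
}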
	I1018 17:12:41.652873    5077 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-2509/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-2509/.minikube}
	I1018 17:12:41.652892    5077 ubuntu.go:190] setting up certificates
	I1018 17:12:41.652901    5077 provision.go:84] configureAuth start
	I1018 17:12:41.652984    5077 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-164474
	I1018 17:12:41.672808    5077 provision.go:143] copyHostCerts
	I1018 17:12:41.672900    5077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem (1078 bytes)
	I1018 17:12:41.673043    5077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem (1123 bytes)
	I1018 17:12:41.673114    5077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem (1675 bytes)
	I1018 17:12:41.673164    5077 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem org=jenkins.addons-164474 san=[127.0.0.1 192.168.49.2 addons-164474 localhost minikube]
	I1018 17:12:42.112564    5077 provision.go:177] copyRemoteCerts
	I1018 17:12:42.112641    5077 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 17:12:42.112690    5077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164474
	I1018 17:12:42.136605    5077 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/addons-164474/id_rsa Username:docker}
	I1018 17:12:42.249792    5077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 17:12:42.268559    5077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1018 17:12:42.287022    5077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 17:12:42.306486    5077 provision.go:87] duration metric: took 653.51536ms to configureAuth
	I1018 17:12:42.306561    5077 ubuntu.go:206] setting minikube options for container-runtime
	I1018 17:12:42.306766    5077 config.go:182] Loaded profile config "addons-164474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:12:42.306879    5077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164474
	I1018 17:12:42.324977    5077 main.go:141] libmachine: Using SSH client type: native
	I1018 17:12:42.325296    5077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1018 17:12:42.325316    5077 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 17:12:42.577432    5077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 17:12:42.577454    5077 machine.go:96] duration metric: took 4.428298838s to provisionDockerMachine
	I1018 17:12:42.577464    5077 client.go:171] duration metric: took 15.304270688s to LocalClient.Create
	I1018 17:12:42.577477    5077 start.go:167] duration metric: took 15.304344831s to libmachine.API.Create "addons-164474"
	I1018 17:12:42.577485    5077 start.go:293] postStartSetup for "addons-164474" (driver="docker")
	I1018 17:12:42.577495    5077 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 17:12:42.577560    5077 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 17:12:42.577608    5077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164474
	I1018 17:12:42.596842    5077 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/addons-164474/id_rsa Username:docker}
	I1018 17:12:42.700607    5077 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 17:12:42.703748    5077 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 17:12:42.703776    5077 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 17:12:42.703787    5077 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/addons for local assets ...
	I1018 17:12:42.703852    5077 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/files for local assets ...
	I1018 17:12:42.703880    5077 start.go:296] duration metric: took 126.38925ms for postStartSetup
	I1018 17:12:42.704184    5077 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-164474
	I1018 17:12:42.720667    5077 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/config.json ...
	I1018 17:12:42.720977    5077 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 17:12:42.721026    5077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164474
	I1018 17:12:42.737628    5077 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/addons-164474/id_rsa Username:docker}
	I1018 17:12:42.837832    5077 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 17:12:42.842600    5077 start.go:128] duration metric: took 15.573199139s to createHost
	I1018 17:12:42.842624    5077 start.go:83] releasing machines lock for "addons-164474", held for 15.57334772s
	I1018 17:12:42.842693    5077 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-164474
	I1018 17:12:42.859797    5077 ssh_runner.go:195] Run: cat /version.json
	I1018 17:12:42.859856    5077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164474
	I1018 17:12:42.860106    5077 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 17:12:42.860166    5077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164474
	I1018 17:12:42.879010    5077 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/addons-164474/id_rsa Username:docker}
	I1018 17:12:42.897077    5077 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/addons-164474/id_rsa Username:docker}
	I1018 17:12:43.072311    5077 ssh_runner.go:195] Run: systemctl --version
	I1018 17:12:43.079106    5077 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 17:12:43.115024    5077 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 17:12:43.119511    5077 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 17:12:43.119625    5077 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 17:12:43.149819    5077 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1018 17:12:43.149837    5077 start.go:495] detecting cgroup driver to use...
	I1018 17:12:43.149885    5077 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 17:12:43.149943    5077 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 17:12:43.167266    5077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 17:12:43.180673    5077 docker.go:218] disabling cri-docker service (if available) ...
	I1018 17:12:43.180792    5077 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 17:12:43.198285    5077 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 17:12:43.216792    5077 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 17:12:43.328468    5077 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 17:12:43.459500    5077 docker.go:234] disabling docker service ...
	I1018 17:12:43.459573    5077 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 17:12:43.479911    5077 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 17:12:43.493178    5077 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 17:12:43.619470    5077 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 17:12:43.737101    5077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 17:12:43.749643    5077 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 17:12:43.763411    5077 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 17:12:43.763521    5077 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:12:43.772425    5077 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 17:12:43.772494    5077 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:12:43.781476    5077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:12:43.790204    5077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:12:43.798849    5077 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 17:12:43.806861    5077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:12:43.815102    5077 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:12:43.827722    5077 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
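The series of sed invocations above rewrites keys in CRI-O's drop-in config (pause image, cgroup manager, sysctls). A hedged Go sketch of the same line-rewrite technique follows; rewriteKey is an invented helper, the regular expressions mirror the sed patterns, and only two of the keys are shown.

package main

import (
	"fmt"
	"os"
	"regexp"
)

// rewriteKey replaces any line containing `<key> = ...` with `<key> = "<value>"`,
// matching the sed expressions in the log.
func rewriteKey(content []byte, key, value string) []byte {
	re := regexp.MustCompile(`(?m)^.*` + key + ` = .*$`)
	return re.ReplaceAll(content, []byte(key+` = "`+value+`"`))
}

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Println(err)
		return
	}
	data = rewriteKey(data, "pause_image", "registry.k8s.io/pause:3.10.1")
	data = rewriteKey(data, "cgroup_manager", "cgroupfs")
	if err := os.WriteFile(path, data, 0644); err != nil {
		fmt.Println(err)
	}
}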
	I1018 17:12:43.837302    5077 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 17:12:43.845016    5077 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1018 17:12:43.845105    5077 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1018 17:12:43.859099    5077 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
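The echo into /proc above enables IPv4 forwarding on the node. The direct equivalent in Go, assuming root on the node, is a one-line write to the proc filesystem:

package main

import (
	"fmt"
	"os"
)

func main() {
	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644)
	fmt.Println(err)
}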
	I1018 17:12:43.866681    5077 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 17:12:43.979821    5077 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 17:12:44.105409    5077 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 17:12:44.105509    5077 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 17:12:44.109267    5077 start.go:563] Will wait 60s for crictl version
	I1018 17:12:44.109350    5077 ssh_runner.go:195] Run: which crictl
	I1018 17:12:44.112915    5077 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 17:12:44.136251    5077 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 17:12:44.136403    5077 ssh_runner.go:195] Run: crio --version
	I1018 17:12:44.165110    5077 ssh_runner.go:195] Run: crio --version
	I1018 17:12:44.200662    5077 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 17:12:44.203565    5077 cli_runner.go:164] Run: docker network inspect addons-164474 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 17:12:44.219397    5077 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1018 17:12:44.223299    5077 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 17:12:44.233896    5077 kubeadm.go:883] updating cluster {Name:addons-164474 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-164474 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 17:12:44.234014    5077 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 17:12:44.234076    5077 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 17:12:44.268341    5077 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 17:12:44.268366    5077 crio.go:433] Images already preloaded, skipping extraction
	I1018 17:12:44.268423    5077 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 17:12:44.293173    5077 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 17:12:44.293196    5077 cache_images.go:85] Images are preloaded, skipping loading
	I1018 17:12:44.293203    5077 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1018 17:12:44.293285    5077 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-164474 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-164474 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 17:12:44.293367    5077 ssh_runner.go:195] Run: crio config
	I1018 17:12:44.354410    5077 cni.go:84] Creating CNI manager for ""
	I1018 17:12:44.354490    5077 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 17:12:44.354519    5077 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 17:12:44.354573    5077 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-164474 NodeName:addons-164474 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 17:12:44.354735    5077 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-164474"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
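Configs like the kubeadm document above are typically rendered from cluster parameters before being copied to the node. A minimal Go sketch of that rendering step, using text/template on just the networking fragment, is shown below; the template literal is illustrative and the values are the ones visible in the log, not a claim about how minikube builds the full file.

package main

import (
	"os"
	"text/template"
)

const networking = `networking:
  dnsDomain: {{.DNSDomain}}
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	t := template.Must(template.New("networking").Parse(networking))
	// Values taken from the generated config above.
	_ = t.Execute(os.Stdout, map[string]string{
		"DNSDomain":   "cluster.local",
		"PodSubnet":   "10.244.0.0/16",
		"ServiceCIDR": "10.96.0.0/12",
	})
}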
	
	I1018 17:12:44.354833    5077 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 17:12:44.362591    5077 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 17:12:44.362660    5077 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 17:12:44.370387    5077 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1018 17:12:44.383292    5077 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 17:12:44.395900    5077 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1018 17:12:44.408204    5077 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1018 17:12:44.411662    5077 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 17:12:44.420965    5077 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 17:12:44.534582    5077 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 17:12:44.550424    5077 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474 for IP: 192.168.49.2
	I1018 17:12:44.550452    5077 certs.go:195] generating shared ca certs ...
	I1018 17:12:44.550468    5077 certs.go:227] acquiring lock for ca certs: {Name:mk544ed642fa2832e9f6dd22fa45f3270b7c1ad4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:12:44.550669    5077 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key
	I1018 17:12:45.001586    5077 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt ...
	I1018 17:12:45.001620    5077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt: {Name:mkc15b5d821f189f0721cb2e35bd5820e47a127a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:12:45.001848    5077 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key ...
	I1018 17:12:45.001865    5077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key: {Name:mk8217c17a1e8278b02fa13c181862df662fbda0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:12:45.001956    5077 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key
	I1018 17:12:45.441448    5077 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.crt ...
	I1018 17:12:45.441487    5077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.crt: {Name:mk328c040b1092e413560a324ebe5933d3c0ea7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:12:45.441671    5077 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key ...
	I1018 17:12:45.441683    5077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key: {Name:mkcc8d257200bfcc88192f5105245d9327105cd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:12:45.441761    5077 certs.go:257] generating profile certs ...
	I1018 17:12:45.441825    5077 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/client.key
	I1018 17:12:45.441841    5077 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/client.crt with IP's: []
	I1018 17:12:45.780822    5077 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/client.crt ...
	I1018 17:12:45.780852    5077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/client.crt: {Name:mkcd971006954b338595f6ebdf5b64d252e82cdd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:12:45.781049    5077 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/client.key ...
	I1018 17:12:45.781063    5077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/client.key: {Name:mk89db0728ef38a8fa29dc5172d437227dac855b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:12:45.781146    5077 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/apiserver.key.601289e7
	I1018 17:12:45.781166    5077 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/apiserver.crt.601289e7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1018 17:12:46.656809    5077 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/apiserver.crt.601289e7 ...
	I1018 17:12:46.656847    5077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/apiserver.crt.601289e7: {Name:mkfe5d4799bd0f5c8315bf8173aa80da66216675 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:12:46.657045    5077 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/apiserver.key.601289e7 ...
	I1018 17:12:46.657062    5077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/apiserver.key.601289e7: {Name:mkfa22d393ecd94f5140b71eb78e9296782027b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:12:46.657146    5077 certs.go:382] copying /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/apiserver.crt.601289e7 -> /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/apiserver.crt
	I1018 17:12:46.657238    5077 certs.go:386] copying /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/apiserver.key.601289e7 -> /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/apiserver.key
	I1018 17:12:46.657296    5077 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/proxy-client.key
	I1018 17:12:46.657317    5077 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/proxy-client.crt with IP's: []
	I1018 17:12:47.154729    5077 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/proxy-client.crt ...
	I1018 17:12:47.154759    5077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/proxy-client.crt: {Name:mk474d7203f62648592ecd8e7d65433d3b3f1580 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:12:47.154946    5077 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/proxy-client.key ...
	I1018 17:12:47.154959    5077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/proxy-client.key: {Name:mk637a4d7bd3076882b96bf3ead69e44353cab76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:12:47.155171    5077 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 17:12:47.155214    5077 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem (1078 bytes)
	I1018 17:12:47.155247    5077 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem (1123 bytes)
	I1018 17:12:47.155279    5077 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem (1675 bytes)
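
	At this point minikube has generated the shared CA pair, the proxy client CA, and the profile certificates, including an apiserver cert signed for the service and node IPs listed in the log (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.49.2). As a rough, self-contained sketch of that kind of issuance (standard library only, not minikube's actual crypto.go), the program below creates a throwaway CA and signs a serving certificate carrying the same IP SANs; error handling is elided for brevity:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Throwaway self-signed CA (stands in for ca.crt / ca.key above).
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Serving certificate with the IP SANs seen in the log above.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.168.49.2"),
			},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}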
	I1018 17:12:47.155863    5077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 17:12:47.174366    5077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 17:12:47.192238    5077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 17:12:47.209863    5077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 17:12:47.227449    5077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1018 17:12:47.244844    5077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 17:12:47.263272    5077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 17:12:47.281667    5077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 17:12:47.298545    5077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 17:12:47.315668    5077 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 17:12:47.328109    5077 ssh_runner.go:195] Run: openssl version
	I1018 17:12:47.334469    5077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 17:12:47.342732    5077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:12:47.346325    5077 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:12:47.346446    5077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:12:47.387214    5077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 17:12:47.395381    5077 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 17:12:47.398826    5077 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 17:12:47.398870    5077 kubeadm.go:400] StartCluster: {Name:addons-164474 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-164474 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 17:12:47.398947    5077 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 17:12:47.399014    5077 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 17:12:47.427922    5077 cri.go:89] found id: ""
	I1018 17:12:47.428019    5077 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 17:12:47.436715    5077 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 17:12:47.444407    5077 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 17:12:47.444486    5077 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 17:12:47.451903    5077 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 17:12:47.451967    5077 kubeadm.go:157] found existing configuration files:
	
	I1018 17:12:47.452024    5077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 17:12:47.459571    5077 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 17:12:47.459667    5077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 17:12:47.467138    5077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 17:12:47.474559    5077 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 17:12:47.474655    5077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 17:12:47.482257    5077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 17:12:47.490623    5077 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 17:12:47.490713    5077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 17:12:47.498656    5077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 17:12:47.507073    5077 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 17:12:47.507165    5077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1018 17:12:47.514830    5077 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 17:12:47.559238    5077 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 17:12:47.559476    5077 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 17:12:47.586887    5077 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 17:12:47.587002    5077 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1018 17:12:47.587059    5077 kubeadm.go:318] OS: Linux
	I1018 17:12:47.587135    5077 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 17:12:47.587210    5077 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1018 17:12:47.587281    5077 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 17:12:47.587356    5077 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 17:12:47.587431    5077 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 17:12:47.587514    5077 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 17:12:47.587586    5077 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 17:12:47.587670    5077 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 17:12:47.587744    5077 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1018 17:12:47.650112    5077 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 17:12:47.650278    5077 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 17:12:47.650412    5077 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 17:12:47.657668    5077 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 17:12:47.663816    5077 out.go:252]   - Generating certificates and keys ...
	I1018 17:12:47.663983    5077 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 17:12:47.664085    5077 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 17:12:48.489879    5077 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 17:12:48.982468    5077 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 17:12:49.433194    5077 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 17:12:49.948032    5077 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 17:12:50.067272    5077 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 17:12:50.067593    5077 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-164474 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1018 17:12:50.194869    5077 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 17:12:50.195245    5077 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-164474 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1018 17:12:51.225657    5077 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 17:12:51.554944    5077 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 17:12:52.101880    5077 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 17:12:52.102245    5077 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 17:12:52.639120    5077 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 17:12:53.350459    5077 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 17:12:53.440832    5077 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 17:12:53.970719    5077 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 17:12:54.244861    5077 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 17:12:54.245685    5077 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 17:12:54.249671    5077 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 17:12:54.253100    5077 out.go:252]   - Booting up control plane ...
	I1018 17:12:54.253208    5077 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 17:12:54.253296    5077 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 17:12:54.254040    5077 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 17:12:54.272180    5077 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 17:12:54.272298    5077 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 17:12:54.279288    5077 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 17:12:54.279551    5077 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 17:12:54.279728    5077 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 17:12:54.403451    5077 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 17:12:54.403605    5077 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 17:12:55.405425    5077 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.002320192s
	I1018 17:12:55.408744    5077 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 17:12:55.408869    5077 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1018 17:12:55.409306    5077 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 17:12:55.409403    5077 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 17:12:56.501030    5077 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.091552133s
	I1018 17:12:58.349704    5077 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.940921357s
	I1018 17:13:00.411847    5077 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 5.002705838s
	I1018 17:13:00.437178    5077 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 17:13:00.465226    5077 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 17:13:00.482839    5077 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 17:13:00.483053    5077 kubeadm.go:318] [mark-control-plane] Marking the node addons-164474 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 17:13:00.500261    5077 kubeadm.go:318] [bootstrap-token] Using token: sagae6.mnuvln85kenb52pb
	I1018 17:13:00.503203    5077 out.go:252]   - Configuring RBAC rules ...
	I1018 17:13:00.503342    5077 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 17:13:00.514958    5077 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 17:13:00.524308    5077 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 17:13:00.529226    5077 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 17:13:00.534700    5077 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 17:13:00.541297    5077 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 17:13:00.821649    5077 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 17:13:01.263711    5077 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 17:13:01.819779    5077 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 17:13:01.820789    5077 kubeadm.go:318] 
	I1018 17:13:01.820865    5077 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 17:13:01.820878    5077 kubeadm.go:318] 
	I1018 17:13:01.820978    5077 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 17:13:01.820989    5077 kubeadm.go:318] 
	I1018 17:13:01.821015    5077 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 17:13:01.821080    5077 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 17:13:01.821142    5077 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 17:13:01.821153    5077 kubeadm.go:318] 
	I1018 17:13:01.821210    5077 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 17:13:01.821220    5077 kubeadm.go:318] 
	I1018 17:13:01.821270    5077 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 17:13:01.821278    5077 kubeadm.go:318] 
	I1018 17:13:01.821332    5077 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 17:13:01.821413    5077 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 17:13:01.821490    5077 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 17:13:01.821499    5077 kubeadm.go:318] 
	I1018 17:13:01.821589    5077 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 17:13:01.821673    5077 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 17:13:01.821681    5077 kubeadm.go:318] 
	I1018 17:13:01.821769    5077 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token sagae6.mnuvln85kenb52pb \
	I1018 17:13:01.821879    5077 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:d0244c5bf86cdf97546c6a22045cb6ed9d7ead524d9c98d9ca35da77d5d7a04d \
	I1018 17:13:01.821905    5077 kubeadm.go:318] 	--control-plane 
	I1018 17:13:01.821912    5077 kubeadm.go:318] 
	I1018 17:13:01.822001    5077 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 17:13:01.822011    5077 kubeadm.go:318] 
	I1018 17:13:01.822098    5077 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token sagae6.mnuvln85kenb52pb \
	I1018 17:13:01.822469    5077 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:d0244c5bf86cdf97546c6a22045cb6ed9d7ead524d9c98d9ca35da77d5d7a04d 
	I1018 17:13:01.825810    5077 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1018 17:13:01.826101    5077 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1018 17:13:01.826228    5077 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
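
	The join commands printed above carry a --discovery-token-ca-cert-hash, which kubeadm computes as a SHA-256 digest over the DER-encoded Subject Public Key Info of the cluster CA certificate. The short Go sketch below recomputes that value; it is illustrative only, and the certificate path is simply the one this log uses for the CA:

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		// Path as used by minikube in the log above; adjust for other setups.
		pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			panic("no PEM block found in ca.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// kubeadm's discovery hash is SHA-256 over the DER-encoded SPKI.
		sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
		fmt.Printf("sha256:%x\n", sum)
	}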
	I1018 17:13:01.826255    5077 cni.go:84] Creating CNI manager for ""
	I1018 17:13:01.826266    5077 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 17:13:01.831157    5077 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1018 17:13:01.834000    5077 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 17:13:01.837985    5077 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1018 17:13:01.838004    5077 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 17:13:01.851399    5077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1018 17:13:02.138247    5077 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 17:13:02.138382    5077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 17:13:02.138478    5077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-164474 minikube.k8s.io/updated_at=2025_10_18T17_13_02_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404 minikube.k8s.io/name=addons-164474 minikube.k8s.io/primary=true
	I1018 17:13:02.285593    5077 ops.go:34] apiserver oom_adj: -16
	I1018 17:13:02.285711    5077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 17:13:02.786818    5077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 17:13:03.285819    5077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 17:13:03.786647    5077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 17:13:04.285991    5077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 17:13:04.786693    5077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 17:13:05.286574    5077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 17:13:05.786368    5077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 17:13:06.286775    5077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 17:13:06.382716    5077 kubeadm.go:1113] duration metric: took 4.244381969s to wait for elevateKubeSystemPrivileges
	I1018 17:13:06.382745    5077 kubeadm.go:402] duration metric: took 18.983878488s to StartCluster
	I1018 17:13:06.382762    5077 settings.go:142] acquiring lock: {Name:mk3a3fd093bc95e20cc1842611fedcbe4a79e692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:13:06.382882    5077 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-2509/kubeconfig
	I1018 17:13:06.383219    5077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/kubeconfig: {Name:mk229d7d0b6575771899e0ce8a346bbddf5cf86b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:13:06.383403    5077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 17:13:06.383428    5077 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 17:13:06.383664    5077 config.go:182] Loaded profile config "addons-164474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:13:06.383703    5077 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1018 17:13:06.383784    5077 addons.go:69] Setting yakd=true in profile "addons-164474"
	I1018 17:13:06.383803    5077 addons.go:238] Setting addon yakd=true in "addons-164474"
	I1018 17:13:06.383832    5077 host.go:66] Checking if "addons-164474" exists ...
	I1018 17:13:06.384294    5077 cli_runner.go:164] Run: docker container inspect addons-164474 --format={{.State.Status}}
	I1018 17:13:06.384604    5077 addons.go:69] Setting inspektor-gadget=true in profile "addons-164474"
	I1018 17:13:06.384627    5077 addons.go:238] Setting addon inspektor-gadget=true in "addons-164474"
	I1018 17:13:06.384648    5077 host.go:66] Checking if "addons-164474" exists ...
	I1018 17:13:06.385072    5077 cli_runner.go:164] Run: docker container inspect addons-164474 --format={{.State.Status}}
	I1018 17:13:06.385387    5077 addons.go:69] Setting metrics-server=true in profile "addons-164474"
	I1018 17:13:06.385410    5077 addons.go:238] Setting addon metrics-server=true in "addons-164474"
	I1018 17:13:06.385433    5077 host.go:66] Checking if "addons-164474" exists ...
	I1018 17:13:06.385832    5077 cli_runner.go:164] Run: docker container inspect addons-164474 --format={{.State.Status}}
	I1018 17:13:06.385992    5077 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-164474"
	I1018 17:13:06.386026    5077 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-164474"
	I1018 17:13:06.386052    5077 host.go:66] Checking if "addons-164474" exists ...
	I1018 17:13:06.386438    5077 cli_runner.go:164] Run: docker container inspect addons-164474 --format={{.State.Status}}
	I1018 17:13:06.389587    5077 addons.go:69] Setting cloud-spanner=true in profile "addons-164474"
	I1018 17:13:06.389617    5077 addons.go:238] Setting addon cloud-spanner=true in "addons-164474"
	I1018 17:13:06.389647    5077 host.go:66] Checking if "addons-164474" exists ...
	I1018 17:13:06.390067    5077 cli_runner.go:164] Run: docker container inspect addons-164474 --format={{.State.Status}}
	I1018 17:13:06.391205    5077 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-164474"
	I1018 17:13:06.391230    5077 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-164474"
	I1018 17:13:06.391265    5077 host.go:66] Checking if "addons-164474" exists ...
	I1018 17:13:06.391684    5077 cli_runner.go:164] Run: docker container inspect addons-164474 --format={{.State.Status}}
	I1018 17:13:06.395214    5077 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-164474"
	I1018 17:13:06.395291    5077 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-164474"
	I1018 17:13:06.395324    5077 host.go:66] Checking if "addons-164474" exists ...
	I1018 17:13:06.395812    5077 cli_runner.go:164] Run: docker container inspect addons-164474 --format={{.State.Status}}
	I1018 17:13:06.400747    5077 addons.go:69] Setting default-storageclass=true in profile "addons-164474"
	I1018 17:13:06.400787    5077 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-164474"
	I1018 17:13:06.401238    5077 cli_runner.go:164] Run: docker container inspect addons-164474 --format={{.State.Status}}
	I1018 17:13:06.435000    5077 addons.go:69] Setting registry=true in profile "addons-164474"
	I1018 17:13:06.435091    5077 addons.go:238] Setting addon registry=true in "addons-164474"
	I1018 17:13:06.435152    5077 host.go:66] Checking if "addons-164474" exists ...
	I1018 17:13:06.435644    5077 cli_runner.go:164] Run: docker container inspect addons-164474 --format={{.State.Status}}
	I1018 17:13:06.442188    5077 addons.go:69] Setting gcp-auth=true in profile "addons-164474"
	I1018 17:13:06.442230    5077 mustload.go:65] Loading cluster: addons-164474
	I1018 17:13:06.442436    5077 config.go:182] Loaded profile config "addons-164474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:13:06.442706    5077 cli_runner.go:164] Run: docker container inspect addons-164474 --format={{.State.Status}}
	I1018 17:13:06.452686    5077 addons.go:69] Setting registry-creds=true in profile "addons-164474"
	I1018 17:13:06.452726    5077 addons.go:238] Setting addon registry-creds=true in "addons-164474"
	I1018 17:13:06.452761    5077 host.go:66] Checking if "addons-164474" exists ...
	I1018 17:13:06.453240    5077 cli_runner.go:164] Run: docker container inspect addons-164474 --format={{.State.Status}}
	I1018 17:13:06.456063    5077 addons.go:69] Setting ingress=true in profile "addons-164474"
	I1018 17:13:06.456104    5077 addons.go:238] Setting addon ingress=true in "addons-164474"
	I1018 17:13:06.456147    5077 host.go:66] Checking if "addons-164474" exists ...
	I1018 17:13:06.456618    5077 cli_runner.go:164] Run: docker container inspect addons-164474 --format={{.State.Status}}
	I1018 17:13:06.477305    5077 addons.go:69] Setting ingress-dns=true in profile "addons-164474"
	I1018 17:13:06.477338    5077 addons.go:238] Setting addon ingress-dns=true in "addons-164474"
	I1018 17:13:06.477470    5077 host.go:66] Checking if "addons-164474" exists ...
	I1018 17:13:06.478191    5077 cli_runner.go:164] Run: docker container inspect addons-164474 --format={{.State.Status}}
	I1018 17:13:06.484906    5077 addons.go:69] Setting storage-provisioner=true in profile "addons-164474"
	I1018 17:13:06.485008    5077 addons.go:238] Setting addon storage-provisioner=true in "addons-164474"
	I1018 17:13:06.485058    5077 host.go:66] Checking if "addons-164474" exists ...
	I1018 17:13:06.485539    5077 cli_runner.go:164] Run: docker container inspect addons-164474 --format={{.State.Status}}
	I1018 17:13:06.498185    5077 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-164474"
	I1018 17:13:06.498220    5077 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-164474"
	I1018 17:13:06.498321    5077 out.go:179] * Verifying Kubernetes components...
	I1018 17:13:06.498577    5077 cli_runner.go:164] Run: docker container inspect addons-164474 --format={{.State.Status}}
	I1018 17:13:06.529110    5077 addons.go:69] Setting volcano=true in profile "addons-164474"
	I1018 17:13:06.529151    5077 addons.go:238] Setting addon volcano=true in "addons-164474"
	I1018 17:13:06.529184    5077 host.go:66] Checking if "addons-164474" exists ...
	I1018 17:13:06.529627    5077 cli_runner.go:164] Run: docker container inspect addons-164474 --format={{.State.Status}}
	I1018 17:13:06.545299    5077 addons.go:69] Setting volumesnapshots=true in profile "addons-164474"
	I1018 17:13:06.545328    5077 addons.go:238] Setting addon volumesnapshots=true in "addons-164474"
	I1018 17:13:06.545369    5077 host.go:66] Checking if "addons-164474" exists ...
	I1018 17:13:06.545878    5077 cli_runner.go:164] Run: docker container inspect addons-164474 --format={{.State.Status}}
	I1018 17:13:06.606490    5077 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1018 17:13:06.619341    5077 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1018 17:13:06.626233    5077 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1018 17:13:06.626374    5077 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1018 17:13:06.626431    5077 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1018 17:13:06.626554    5077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164474
	I1018 17:13:06.626830    5077 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 17:13:06.627144    5077 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1018 17:13:06.627164    5077 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1018 17:13:06.627227    5077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164474
	I1018 17:13:06.640650    5077 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1018 17:13:06.641273    5077 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1018 17:13:06.659856    5077 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1018 17:13:06.660003    5077 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1018 17:13:06.660423    5077 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1018 17:13:06.666113    5077 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1018 17:13:06.666197    5077 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1018 17:13:06.666357    5077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164474
	I1018 17:13:06.660496    5077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164474
	I1018 17:13:06.681811    5077 addons.go:238] Setting addon default-storageclass=true in "addons-164474"
	I1018 17:13:06.681858    5077 host.go:66] Checking if "addons-164474" exists ...
	I1018 17:13:06.682293    5077 cli_runner.go:164] Run: docker container inspect addons-164474 --format={{.State.Status}}
	I1018 17:13:06.685853    5077 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1018 17:13:06.685880    5077 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1018 17:13:06.685940    5077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164474
	I1018 17:13:06.693472    5077 host.go:66] Checking if "addons-164474" exists ...
	I1018 17:13:06.695425    5077 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1018 17:13:06.698554    5077 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1018 17:13:06.698618    5077 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1018 17:13:06.698715    5077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164474
	I1018 17:13:06.711663    5077 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1018 17:13:06.715302    5077 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1018 17:13:06.718899    5077 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1018 17:13:06.721875    5077 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1018 17:13:06.722895    5077 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-164474"
	I1018 17:13:06.722930    5077 host.go:66] Checking if "addons-164474" exists ...
	I1018 17:13:06.723327    5077 cli_runner.go:164] Run: docker container inspect addons-164474 --format={{.State.Status}}
	I1018 17:13:06.755522    5077 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1018 17:13:06.795307    5077 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1018 17:13:06.797366    5077 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	W1018 17:13:06.797679    5077 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1018 17:13:06.809044    5077 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1018 17:13:06.811542    5077 out.go:179]   - Using image docker.io/registry:3.0.0
	I1018 17:13:06.811602    5077 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1018 17:13:06.814777    5077 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 17:13:06.812089    5077 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/addons-164474/id_rsa Username:docker}
	I1018 17:13:06.814738    5077 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1018 17:13:06.814758    5077 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 17:13:06.820600    5077 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1018 17:13:06.820718    5077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164474
	I1018 17:13:06.837607    5077 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1018 17:13:06.837853    5077 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1018 17:13:06.838043    5077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164474
	I1018 17:13:06.848551    5077 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 17:13:06.848572    5077 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 17:13:06.848635    5077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164474
	I1018 17:13:06.881318    5077 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1018 17:13:06.881341    5077 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1018 17:13:06.881402    5077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164474
	I1018 17:13:06.887795    5077 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/addons-164474/id_rsa Username:docker}
	I1018 17:13:06.893440    5077 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1018 17:13:06.897848    5077 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1018 17:13:06.897924    5077 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1018 17:13:06.905134    5077 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1018 17:13:06.905158    5077 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1018 17:13:06.905225    5077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164474
	I1018 17:13:06.915167    5077 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1018 17:13:06.915205    5077 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1018 17:13:06.915291    5077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164474
	I1018 17:13:06.924676    5077 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/addons-164474/id_rsa Username:docker}
	I1018 17:13:06.932977    5077 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 17:13:06.933001    5077 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 17:13:06.933062    5077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164474
	I1018 17:13:06.933213    5077 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/addons-164474/id_rsa Username:docker}
	I1018 17:13:06.935785    5077 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1018 17:13:06.938654    5077 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1018 17:13:06.938677    5077 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1018 17:13:06.938744    5077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164474
	I1018 17:13:06.961564    5077 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/addons-164474/id_rsa Username:docker}
	I1018 17:13:06.965326    5077 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1018 17:13:06.972321    5077 out.go:179]   - Using image docker.io/busybox:stable
	I1018 17:13:06.975154    5077 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1018 17:13:06.975184    5077 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1018 17:13:06.975244    5077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164474
	I1018 17:13:06.980873    5077 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/addons-164474/id_rsa Username:docker}
	I1018 17:13:07.029457    5077 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/addons-164474/id_rsa Username:docker}
	I1018 17:13:07.057634    5077 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/addons-164474/id_rsa Username:docker}
	I1018 17:13:07.059916    5077 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/addons-164474/id_rsa Username:docker}
	I1018 17:13:07.074458    5077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1018 17:13:07.117275    5077 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/addons-164474/id_rsa Username:docker}
	I1018 17:13:07.124488    5077 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/addons-164474/id_rsa Username:docker}
	I1018 17:13:07.125591    5077 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/addons-164474/id_rsa Username:docker}
	I1018 17:13:07.126281    5077 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/addons-164474/id_rsa Username:docker}
	I1018 17:13:07.127203    5077 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/addons-164474/id_rsa Username:docker}
	I1018 17:13:07.130995    5077 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/addons-164474/id_rsa Username:docker}
	W1018 17:13:07.137812    5077 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1018 17:13:07.137857    5077 retry.go:31] will retry after 339.387014ms: ssh: handshake failed: EOF
	W1018 17:13:07.138870    5077 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1018 17:13:07.138895    5077 retry.go:31] will retry after 251.728274ms: ssh: handshake failed: EOF
	I1018 17:13:07.269583    5077 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 17:13:07.626672    5077 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1018 17:13:07.626703    5077 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1018 17:13:07.697632    5077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1018 17:13:07.713029    5077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 17:13:07.769073    5077 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1018 17:13:07.769145    5077 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1018 17:13:07.801553    5077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 17:13:07.855864    5077 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1018 17:13:07.855926    5077 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1018 17:13:07.866547    5077 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1018 17:13:07.866613    5077 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1018 17:13:07.871697    5077 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1018 17:13:07.871763    5077 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1018 17:13:07.875145    5077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1018 17:13:07.878798    5077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1018 17:13:07.905977    5077 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1018 17:13:07.906039    5077 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1018 17:13:07.919786    5077 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1018 17:13:07.919848    5077 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1018 17:13:07.944876    5077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1018 17:13:07.967894    5077 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1018 17:13:07.967964    5077 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1018 17:13:07.971638    5077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1018 17:13:07.979771    5077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1018 17:13:07.990219    5077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 17:13:08.004249    5077 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1018 17:13:08.004336    5077 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1018 17:13:08.008340    5077 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1018 17:13:08.008425    5077 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1018 17:13:08.091828    5077 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1018 17:13:08.091901    5077 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1018 17:13:08.110811    5077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1018 17:13:08.133680    5077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1018 17:13:08.150149    5077 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1018 17:13:08.150226    5077 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1018 17:13:08.160806    5077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1018 17:13:08.237516    5077 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1018 17:13:08.237588    5077 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1018 17:13:08.296330    5077 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1018 17:13:08.296398    5077 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1018 17:13:08.300176    5077 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1018 17:13:08.300242    5077 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1018 17:13:08.440506    5077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1018 17:13:08.446374    5077 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.371882684s)
	I1018 17:13:08.446533    5077 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
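The sed expression in the command above injects a hosts block for host.minikube.internal into the CoreDNS Corefile. A minimal way to confirm the injected record, assuming kubectl access to the same cluster (this check is not part of the test run), would be:

	kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
	# the output should include the injected block:
	#   hosts {
	#      192.168.49.1 host.minikube.internal
	#      fallthrough
	#   }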
	I1018 17:13:08.446461    5077 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.176855084s)
	I1018 17:13:08.448126    5077 node_ready.go:35] waiting up to 6m0s for node "addons-164474" to be "Ready" ...
	I1018 17:13:08.486470    5077 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1018 17:13:08.486550    5077 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1018 17:13:08.504452    5077 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1018 17:13:08.504515    5077 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1018 17:13:08.705046    5077 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1018 17:13:08.705115    5077 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1018 17:13:08.740199    5077 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1018 17:13:08.740274    5077 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1018 17:13:08.936478    5077 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1018 17:13:08.936545    5077 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1018 17:13:08.951393    5077 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-164474" context rescaled to 1 replicas
	I1018 17:13:08.986569    5077 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 17:13:08.986642    5077 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1018 17:13:09.200327    5077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 17:13:09.298167    5077 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1018 17:13:09.298235    5077 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1018 17:13:09.474354    5077 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1018 17:13:09.474420    5077 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1018 17:13:09.779653    5077 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1018 17:13:09.779723    5077 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1018 17:13:09.847795    5077 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1018 17:13:09.847865    5077 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1018 17:13:09.986318    5077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1018 17:13:10.154202    5077 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.456496614s)
	W1018 17:13:10.490523    5077 node_ready.go:57] node "addons-164474" has "Ready":"False" status (will retry)
	I1018 17:13:11.502374    5077 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.789267682s)
	I1018 17:13:11.502473    5077 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.700849438s)
	I1018 17:13:11.502736    5077 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (3.627529941s)
	I1018 17:13:11.502796    5077 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.62394114s)
	I1018 17:13:11.502833    5077 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.557852447s)
	I1018 17:13:11.693266    5077 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.721551766s)
	I1018 17:13:12.777152    5077 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.797299145s)
	I1018 17:13:12.777318    5077 addons.go:479] Verifying addon ingress=true in "addons-164474"
	I1018 17:13:12.777484    5077 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.666600622s)
	I1018 17:13:12.777521    5077 addons.go:479] Verifying addon metrics-server=true in "addons-164474"
	I1018 17:13:12.777667    5077 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.643925016s)
	I1018 17:13:12.777249    5077 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.786960383s)
	W1018 17:13:12.777730    5077 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 17:13:12.777748    5077 retry.go:31] will retry after 149.54801ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
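	The failure above is kubectl's client-side validation rejecting a manifest document in ig-crd.yaml that lacks top-level apiVersion and kind fields; every document in an applied file must declare both. A hedged way to reproduce only the validation step without touching the cluster (a diagnostic sketch, not something the test itself runs) would be:

	kubectl apply --dry-run=client -f /etc/kubernetes/addons/ig-crd.yaml
	# expected to fail with the same "apiVersion not set, kind not set" error
	# until the offending document in the file carries both fields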
	I1018 17:13:12.777789    5077 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.616909525s)
	I1018 17:13:12.777798    5077 addons.go:479] Verifying addon registry=true in "addons-164474"
	I1018 17:13:12.777923    5077 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.337338894s)
	I1018 17:13:12.778285    5077 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.577858096s)
	W1018 17:13:12.778332    5077 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1018 17:13:12.778346    5077 retry.go:31] will retry after 275.628822ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
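	This failure is an ordering problem rather than a bad manifest: the VolumeSnapshotClass object is applied in the same batch as the CRDs that define its kind, so the API server has no resource mapping for it yet, which is what the stderr hint "ensure CRDs are installed first" points at. A sketch of sequencing the same files by hand, assuming kubectl access and the paths used above (minikube instead recovers via the retries that follow), would be:

	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=Established crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml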
	I1018 17:13:12.781551    5077 out.go:179] * Verifying registry addon...
	I1018 17:13:12.781713    5077 out.go:179] * Verifying ingress addon...
	I1018 17:13:12.781759    5077 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-164474 service yakd-dashboard -n yakd-dashboard
	
	I1018 17:13:12.786023    5077 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1018 17:13:12.787087    5077 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1018 17:13:12.790733    5077 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1018 17:13:12.790806    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:12.791002    5077 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1018 17:13:12.791036    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:12.927512    5077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1018 17:13:12.954859    5077 node_ready.go:57] node "addons-164474" has "Ready":"False" status (will retry)
	I1018 17:13:13.054922    5077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 17:13:13.067376    5077 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.080953665s)
	I1018 17:13:13.067475    5077 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-164474"
	I1018 17:13:13.072401    5077 out.go:179] * Verifying csi-hostpath-driver addon...
	I1018 17:13:13.076098    5077 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1018 17:13:13.085487    5077 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1018 17:13:13.085562    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:13.291504    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:13.291706    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:13.588650    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:13.792011    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:13.792501    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:13.938086    5077 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.01048657s)
	W1018 17:13:13.938165    5077 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 17:13:13.938193    5077 retry.go:31] will retry after 378.055853ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 17:13:14.080732    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:14.289943    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:14.290143    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:14.307390    5077 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1018 17:13:14.307487    5077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164474
	I1018 17:13:14.316805    5077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 17:13:14.328788    5077 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/addons-164474/id_rsa Username:docker}
	I1018 17:13:14.461115    5077 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1018 17:13:14.474501    5077 addons.go:238] Setting addon gcp-auth=true in "addons-164474"
	I1018 17:13:14.474544    5077 host.go:66] Checking if "addons-164474" exists ...
	I1018 17:13:14.474980    5077 cli_runner.go:164] Run: docker container inspect addons-164474 --format={{.State.Status}}
	I1018 17:13:14.496874    5077 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1018 17:13:14.496949    5077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164474
	I1018 17:13:14.520254    5077 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/addons-164474/id_rsa Username:docker}
	I1018 17:13:14.579382    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:14.791434    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:14.792086    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:15.079727    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:15.290105    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:15.290527    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1018 17:13:15.451484    5077 node_ready.go:57] node "addons-164474" has "Ready":"False" status (will retry)
	I1018 17:13:15.579187    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:15.793236    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:15.793689    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:15.969909    5077 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.914893506s)
	I1018 17:13:15.970005    5077 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.653168794s)
	I1018 17:13:15.970077    5077 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.473174477s)
	W1018 17:13:15.970252    5077 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 17:13:15.970283    5077 retry.go:31] will retry after 746.477138ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 17:13:15.973368    5077 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 17:13:15.976262    5077 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1018 17:13:15.979197    5077 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1018 17:13:15.979220    5077 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1018 17:13:16.003867    5077 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1018 17:13:16.003892    5077 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1018 17:13:16.021844    5077 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1018 17:13:16.021869    5077 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1018 17:13:16.036390    5077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1018 17:13:16.080807    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:16.291108    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:16.291278    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:16.524622    5077 addons.go:479] Verifying addon gcp-auth=true in "addons-164474"
	I1018 17:13:16.527651    5077 out.go:179] * Verifying gcp-auth addon...
	I1018 17:13:16.531235    5077 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1018 17:13:16.538792    5077 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1018 17:13:16.538820    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:16.639237    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:16.717488    5077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 17:13:16.790837    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:16.791120    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:17.034118    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:17.079898    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:17.290173    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:17.290361    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 17:13:17.451948    5077 node_ready.go:57] node "addons-164474" has "Ready":"False" status (will retry)
	I1018 17:13:17.534984    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 17:13:17.549488    5077 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 17:13:17.549516    5077 retry.go:31] will retry after 485.971313ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 17:13:17.579501    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:17.789437    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:17.791159    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:18.034919    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:18.036103    5077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 17:13:18.079581    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:18.295644    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:18.296153    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:18.534729    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:18.580134    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:18.789911    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:18.792707    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 17:13:18.822730    5077 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 17:13:18.822762    5077 retry.go:31] will retry after 894.899686ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 17:13:19.034981    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:19.079793    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:19.290752    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:19.290826    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:19.534543    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:19.579557    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:19.717862    5077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 17:13:19.794973    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:19.795539    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 17:13:19.952077    5077 node_ready.go:57] node "addons-164474" has "Ready":"False" status (will retry)
	I1018 17:13:20.035555    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:20.079589    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:20.289664    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:20.290979    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:20.537872    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 17:13:20.545955    5077 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 17:13:20.545985    5077 retry.go:31] will retry after 1.970203596s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 17:13:20.579713    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:20.789886    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:20.790631    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:21.034439    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:21.079120    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:21.289128    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:21.290169    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:21.535073    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:21.578949    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:21.790057    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:21.790354    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:22.034281    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:22.078997    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:22.289175    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:22.290572    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 17:13:22.451456    5077 node_ready.go:57] node "addons-164474" has "Ready":"False" status (will retry)
	I1018 17:13:22.516714    5077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 17:13:22.536073    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:22.578694    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:22.791156    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:22.792519    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:23.037026    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:23.079640    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:23.289456    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:23.290514    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 17:13:23.317414    5077 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 17:13:23.317444    5077 retry.go:31] will retry after 1.464282054s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 17:13:23.534977    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:23.579707    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:23.790081    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:23.790950    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:24.035432    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:24.079309    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:24.289354    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:24.290071    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 17:13:24.451586    5077 node_ready.go:57] node "addons-164474" has "Ready":"False" status (will retry)
	I1018 17:13:24.534568    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:24.579311    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:24.782765    5077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 17:13:24.794137    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:24.794444    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:25.034629    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:25.079891    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:25.290411    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:25.291867    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:25.557969    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:25.581599    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 17:13:25.639117    5077 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 17:13:25.639148    5077 retry.go:31] will retry after 4.590765672s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 17:13:25.788928    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:25.789678    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:26.034518    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:26.079377    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:26.290729    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:26.291120    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 17:13:26.452108    5077 node_ready.go:57] node "addons-164474" has "Ready":"False" status (will retry)
	I1018 17:13:26.534868    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:26.579807    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:26.789021    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:26.789926    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:27.035264    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:27.082034    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:27.289118    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:27.289906    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:27.534298    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:27.579053    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:27.790040    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:27.790357    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:28.034635    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:28.135119    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:28.290590    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:28.291045    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:28.534737    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:28.579550    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:28.789175    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:28.790821    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 17:13:28.951164    5077 node_ready.go:57] node "addons-164474" has "Ready":"False" status (will retry)
	I1018 17:13:29.034871    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:29.079640    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:29.289953    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:29.290091    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:29.535123    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:29.579978    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:29.789654    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:29.789774    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:30.035642    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:30.080127    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:30.230292    5077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 17:13:30.290740    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:30.290808    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:30.535056    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:30.580131    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:30.791595    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:30.792453    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 17:13:30.954319    5077 node_ready.go:57] node "addons-164474" has "Ready":"False" status (will retry)
	I1018 17:13:31.034752    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 17:13:31.040212    5077 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 17:13:31.040310    5077 retry.go:31] will retry after 7.624700004s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 17:13:31.079285    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:31.289433    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:31.290534    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:31.534626    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:31.579462    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:31.790566    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:31.790674    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:32.034612    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:32.082985    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:32.289661    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:32.289904    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:32.534242    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:32.579110    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:32.788828    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:32.790137    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:33.035048    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:33.079952    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:33.289014    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:33.290380    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 17:13:33.451341    5077 node_ready.go:57] node "addons-164474" has "Ready":"False" status (will retry)
	I1018 17:13:33.534341    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:33.579235    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:33.789742    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:33.789858    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:34.034972    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:34.079606    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:34.290441    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:34.290616    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:34.534314    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:34.579390    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:34.789335    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:34.789927    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:35.034733    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:35.079474    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:35.292466    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:35.292873    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 17:13:35.451624    5077 node_ready.go:57] node "addons-164474" has "Ready":"False" status (will retry)
	I1018 17:13:35.534282    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:35.579159    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:35.789461    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:35.790669    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:36.034798    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:36.079737    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:36.290196    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:36.290857    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:36.534296    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:36.579265    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:36.789533    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:36.790928    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:37.034561    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:37.079633    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:37.290191    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:37.290423    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 17:13:37.451761    5077 node_ready.go:57] node "addons-164474" has "Ready":"False" status (will retry)
	I1018 17:13:37.534566    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:37.579135    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:37.788693    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:37.789783    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:38.035049    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:38.080024    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:38.289525    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:38.289665    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:38.534964    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:38.579852    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:38.666073    5077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 17:13:38.790345    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:38.792397    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:39.035036    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:39.079701    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:39.291766    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:39.292250    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 17:13:39.452186    5077 node_ready.go:57] node "addons-164474" has "Ready":"False" status (will retry)
	W1018 17:13:39.467858    5077 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 17:13:39.467889    5077 retry.go:31] will retry after 13.863401369s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
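	The validation error above shows kubectl rejecting /etc/kubernetes/addons/ig-crd.yaml because the manifest is missing its apiVersion and kind fields, and minikube responds by retrying the apply with a growing delay (about 7.6s, then 13.9s). The Go sketch below only illustrates that retry-with-backoff pattern; applyAddonManifest is a hypothetical stand-in and this is not minikube's actual retry.go implementation.

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// applyAddonManifest stands in for the failing kubectl apply step above.
	// It is a hypothetical placeholder that always returns the same error.
	func applyAddonManifest() error {
		return errors.New("error validating ig-crd.yaml: apiVersion not set, kind not set")
	}

	func main() {
		// Retry with a growing, jittered delay, mirroring the
		// "will retry after 7.6s ... 13.9s" progression seen in the log.
		delay := 5 * time.Second
		for attempt := 1; attempt <= 4; attempt++ {
			if err := applyAddonManifest(); err == nil {
				fmt.Println("apply succeeded")
				return
			} else {
				wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
				fmt.Printf("attempt %d failed (%v), will retry after %s\n", attempt, err, wait)
				time.Sleep(wait)
			}
			delay *= 2 // back off before the next attempt
		}
		fmt.Println("giving up after repeated failures")
	}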
	I1018 17:13:39.534579    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:39.579509    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:39.790631    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:39.790835    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:40.035701    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:40.079862    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:40.290551    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:40.290903    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:40.534480    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:40.579484    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:40.790038    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:40.790351    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:41.035073    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:41.079549    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:41.290238    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:41.290435    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:41.533961    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:41.579678    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:41.789498    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:41.790729    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 17:13:41.951680    5077 node_ready.go:57] node "addons-164474" has "Ready":"False" status (will retry)
	I1018 17:13:42.035475    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:42.080477    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:42.290247    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:42.290398    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:42.534686    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:42.579491    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:42.789725    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:42.790737    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:43.034356    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:43.079349    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:43.289151    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:43.290190    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:43.533966    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:43.579662    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:43.790044    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:43.790245    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:44.035086    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:44.079986    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:44.288875    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:44.289950    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 17:13:44.451725    5077 node_ready.go:57] node "addons-164474" has "Ready":"False" status (will retry)
	I1018 17:13:44.534467    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:44.579569    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:44.790735    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:44.790987    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:45.042951    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:45.083517    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:45.290527    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:45.290810    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:45.534567    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:45.579650    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:45.789540    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:45.790714    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:46.034696    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:46.079791    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:46.289193    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:46.291163    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:46.534660    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:46.579923    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:46.789275    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:46.790434    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 17:13:46.951327    5077 node_ready.go:57] node "addons-164474" has "Ready":"False" status (will retry)
	I1018 17:13:47.035069    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:47.079980    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:47.289140    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:47.289621    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:47.534879    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:47.579835    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:47.810454    5077 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1018 17:13:47.810539    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:47.813645    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:47.973905    5077 node_ready.go:49] node "addons-164474" is "Ready"
	I1018 17:13:47.973937    5077 node_ready.go:38] duration metric: took 39.52576049s for node "addons-164474" to be "Ready" ...
	I1018 17:13:47.973950    5077 api_server.go:52] waiting for apiserver process to appear ...
	I1018 17:13:47.974008    5077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:13:47.992493    5077 api_server.go:72] duration metric: took 41.609038263s to wait for apiserver process to appear ...
	I1018 17:13:47.992514    5077 api_server.go:88] waiting for apiserver healthz status ...
	I1018 17:13:47.992533    5077 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 17:13:48.061263    5077 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1018 17:13:48.064520    5077 api_server.go:141] control plane version: v1.34.1
	I1018 17:13:48.064566    5077 api_server.go:131] duration metric: took 72.044897ms to wait for apiserver health ...
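	The healthz lines above poll the apiserver endpoint until it returns 200. The following is a minimal Go sketch of that kind of polling loop, assuming the same illustrative address and skipping TLS verification for brevity; it is not minikube's actual api_server.go code.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// HTTP client that tolerates the apiserver's self-signed certificate
		// (illustrative only; a real client would verify against the cluster CA).
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		url := "https://192.168.49.2:8443/healthz"

		// Poll until /healthz reports 200 OK or we run out of attempts.
		for attempt := 0; attempt < 30; attempt++ {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver healthz returned 200: ok")
					return
				}
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("apiserver did not become healthy in time")
	}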
	I1018 17:13:48.064575    5077 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 17:13:48.073352    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:48.105818    5077 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1018 17:13:48.105842    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:48.108261    5077 system_pods.go:59] 19 kube-system pods found
	I1018 17:13:48.108302    5077 system_pods.go:61] "coredns-66bc5c9577-467ch" [b89aeb20-752c-43e2-b8bb-580999350080] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 17:13:48.108312    5077 system_pods.go:61] "csi-hostpath-attacher-0" [a96e6e91-3fd7-4a35-96b6-dc9078bc0615] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 17:13:48.108318    5077 system_pods.go:61] "csi-hostpath-resizer-0" [2fbebb55-78e5-4594-9979-227aec6c93eb] Pending
	I1018 17:13:48.108323    5077 system_pods.go:61] "csi-hostpathplugin-9l87p" [43074c41-36f1-48ed-85da-9f4166509d86] Pending
	I1018 17:13:48.108327    5077 system_pods.go:61] "etcd-addons-164474" [0c1d9b95-efb3-41c6-ad13-ecdc5e2aed23] Running
	I1018 17:13:48.108331    5077 system_pods.go:61] "kindnet-hsvb9" [70417575-3af4-4899-aaf4-eb73d8dc18fc] Running
	I1018 17:13:48.108338    5077 system_pods.go:61] "kube-apiserver-addons-164474" [d7998556-38e2-44d9-b248-e8168e01f0b7] Running
	I1018 17:13:48.108343    5077 system_pods.go:61] "kube-controller-manager-addons-164474" [66eae987-1a94-433b-8387-4fe4b8f54f6d] Running
	I1018 17:13:48.108356    5077 system_pods.go:61] "kube-ingress-dns-minikube" [587c439a-5adf-4c7a-b2cf-37b34fbc7fe4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 17:13:48.108364    5077 system_pods.go:61] "kube-proxy-ccs4c" [07b2f86d-366e-47c9-8aad-6b7b51f33565] Running
	I1018 17:13:48.108370    5077 system_pods.go:61] "kube-scheduler-addons-164474" [e288ffed-0c2c-4993-8d34-daa6250e509d] Running
	I1018 17:13:48.108384    5077 system_pods.go:61] "metrics-server-85b7d694d7-8dnml" [7cf655d5-48cd-488d-9cbd-b19f09925a22] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 17:13:48.108389    5077 system_pods.go:61] "nvidia-device-plugin-daemonset-w6sqz" [ef275008-60c3-4bde-a747-35f70a06cb02] Pending
	I1018 17:13:48.108402    5077 system_pods.go:61] "registry-6b586f9694-fwkz8" [d12f3e97-a0a1-4ac6-aa88-1e38730ecf05] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 17:13:48.108408    5077 system_pods.go:61] "registry-creds-764b6fb674-k267j" [66a2f897-d4c3-4ebf-a15a-51183d31deaa] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 17:13:48.108418    5077 system_pods.go:61] "registry-proxy-6x6dm" [7cc511f1-c2a0-4516-85b6-eee6876bc7ae] Pending
	I1018 17:13:48.108423    5077 system_pods.go:61] "snapshot-controller-7d9fbc56b8-f8bm6" [96f5066a-4702-41dc-b553-575c361e1501] Pending
	I1018 17:13:48.108428    5077 system_pods.go:61] "snapshot-controller-7d9fbc56b8-gnvj9" [a5334265-1b53-49d6-95e8-60a89ea17d73] Pending
	I1018 17:13:48.108432    5077 system_pods.go:61] "storage-provisioner" [600b4ef5-41ba-4562-8384-bcfb6ce65634] Pending
	I1018 17:13:48.108437    5077 system_pods.go:74] duration metric: took 43.85736ms to wait for pod list to return data ...
	I1018 17:13:48.108448    5077 default_sa.go:34] waiting for default service account to be created ...
	I1018 17:13:48.117539    5077 default_sa.go:45] found service account: "default"
	I1018 17:13:48.117566    5077 default_sa.go:55] duration metric: took 9.112193ms for default service account to be created ...
	I1018 17:13:48.117575    5077 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 17:13:48.142095    5077 system_pods.go:86] 19 kube-system pods found
	I1018 17:13:48.142134    5077 system_pods.go:89] "coredns-66bc5c9577-467ch" [b89aeb20-752c-43e2-b8bb-580999350080] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 17:13:48.142144    5077 system_pods.go:89] "csi-hostpath-attacher-0" [a96e6e91-3fd7-4a35-96b6-dc9078bc0615] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 17:13:48.142152    5077 system_pods.go:89] "csi-hostpath-resizer-0" [2fbebb55-78e5-4594-9979-227aec6c93eb] Pending
	I1018 17:13:48.142157    5077 system_pods.go:89] "csi-hostpathplugin-9l87p" [43074c41-36f1-48ed-85da-9f4166509d86] Pending
	I1018 17:13:48.142161    5077 system_pods.go:89] "etcd-addons-164474" [0c1d9b95-efb3-41c6-ad13-ecdc5e2aed23] Running
	I1018 17:13:48.142165    5077 system_pods.go:89] "kindnet-hsvb9" [70417575-3af4-4899-aaf4-eb73d8dc18fc] Running
	I1018 17:13:48.142170    5077 system_pods.go:89] "kube-apiserver-addons-164474" [d7998556-38e2-44d9-b248-e8168e01f0b7] Running
	I1018 17:13:48.142174    5077 system_pods.go:89] "kube-controller-manager-addons-164474" [66eae987-1a94-433b-8387-4fe4b8f54f6d] Running
	I1018 17:13:48.142184    5077 system_pods.go:89] "kube-ingress-dns-minikube" [587c439a-5adf-4c7a-b2cf-37b34fbc7fe4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 17:13:48.142189    5077 system_pods.go:89] "kube-proxy-ccs4c" [07b2f86d-366e-47c9-8aad-6b7b51f33565] Running
	I1018 17:13:48.142196    5077 system_pods.go:89] "kube-scheduler-addons-164474" [e288ffed-0c2c-4993-8d34-daa6250e509d] Running
	I1018 17:13:48.142204    5077 system_pods.go:89] "metrics-server-85b7d694d7-8dnml" [7cf655d5-48cd-488d-9cbd-b19f09925a22] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 17:13:48.142213    5077 system_pods.go:89] "nvidia-device-plugin-daemonset-w6sqz" [ef275008-60c3-4bde-a747-35f70a06cb02] Pending
	I1018 17:13:48.142219    5077 system_pods.go:89] "registry-6b586f9694-fwkz8" [d12f3e97-a0a1-4ac6-aa88-1e38730ecf05] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 17:13:48.142225    5077 system_pods.go:89] "registry-creds-764b6fb674-k267j" [66a2f897-d4c3-4ebf-a15a-51183d31deaa] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 17:13:48.142233    5077 system_pods.go:89] "registry-proxy-6x6dm" [7cc511f1-c2a0-4516-85b6-eee6876bc7ae] Pending
	I1018 17:13:48.142237    5077 system_pods.go:89] "snapshot-controller-7d9fbc56b8-f8bm6" [96f5066a-4702-41dc-b553-575c361e1501] Pending
	I1018 17:13:48.142241    5077 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gnvj9" [a5334265-1b53-49d6-95e8-60a89ea17d73] Pending
	I1018 17:13:48.142252    5077 system_pods.go:89] "storage-provisioner" [600b4ef5-41ba-4562-8384-bcfb6ce65634] Pending
	I1018 17:13:48.142265    5077 retry.go:31] will retry after 201.706754ms: missing components: kube-dns
	I1018 17:13:48.300880    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:48.305091    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:48.391459    5077 system_pods.go:86] 19 kube-system pods found
	I1018 17:13:48.391491    5077 system_pods.go:89] "coredns-66bc5c9577-467ch" [b89aeb20-752c-43e2-b8bb-580999350080] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 17:13:48.391499    5077 system_pods.go:89] "csi-hostpath-attacher-0" [a96e6e91-3fd7-4a35-96b6-dc9078bc0615] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 17:13:48.391506    5077 system_pods.go:89] "csi-hostpath-resizer-0" [2fbebb55-78e5-4594-9979-227aec6c93eb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 17:13:48.391513    5077 system_pods.go:89] "csi-hostpathplugin-9l87p" [43074c41-36f1-48ed-85da-9f4166509d86] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 17:13:48.391518    5077 system_pods.go:89] "etcd-addons-164474" [0c1d9b95-efb3-41c6-ad13-ecdc5e2aed23] Running
	I1018 17:13:48.391528    5077 system_pods.go:89] "kindnet-hsvb9" [70417575-3af4-4899-aaf4-eb73d8dc18fc] Running
	I1018 17:13:48.391534    5077 system_pods.go:89] "kube-apiserver-addons-164474" [d7998556-38e2-44d9-b248-e8168e01f0b7] Running
	I1018 17:13:48.391547    5077 system_pods.go:89] "kube-controller-manager-addons-164474" [66eae987-1a94-433b-8387-4fe4b8f54f6d] Running
	I1018 17:13:48.391555    5077 system_pods.go:89] "kube-ingress-dns-minikube" [587c439a-5adf-4c7a-b2cf-37b34fbc7fe4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 17:13:48.391560    5077 system_pods.go:89] "kube-proxy-ccs4c" [07b2f86d-366e-47c9-8aad-6b7b51f33565] Running
	I1018 17:13:48.391564    5077 system_pods.go:89] "kube-scheduler-addons-164474" [e288ffed-0c2c-4993-8d34-daa6250e509d] Running
	I1018 17:13:48.391573    5077 system_pods.go:89] "metrics-server-85b7d694d7-8dnml" [7cf655d5-48cd-488d-9cbd-b19f09925a22] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 17:13:48.391580    5077 system_pods.go:89] "nvidia-device-plugin-daemonset-w6sqz" [ef275008-60c3-4bde-a747-35f70a06cb02] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 17:13:48.391588    5077 system_pods.go:89] "registry-6b586f9694-fwkz8" [d12f3e97-a0a1-4ac6-aa88-1e38730ecf05] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 17:13:48.391594    5077 system_pods.go:89] "registry-creds-764b6fb674-k267j" [66a2f897-d4c3-4ebf-a15a-51183d31deaa] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 17:13:48.391606    5077 system_pods.go:89] "registry-proxy-6x6dm" [7cc511f1-c2a0-4516-85b6-eee6876bc7ae] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 17:13:48.391614    5077 system_pods.go:89] "snapshot-controller-7d9fbc56b8-f8bm6" [96f5066a-4702-41dc-b553-575c361e1501] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 17:13:48.391619    5077 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gnvj9" [a5334265-1b53-49d6-95e8-60a89ea17d73] Pending
	I1018 17:13:48.391628    5077 system_pods.go:89] "storage-provisioner" [600b4ef5-41ba-4562-8384-bcfb6ce65634] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 17:13:48.391642    5077 retry.go:31] will retry after 266.916872ms: missing components: kube-dns
	I1018 17:13:48.566485    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:48.671711    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:48.681165    5077 system_pods.go:86] 19 kube-system pods found
	I1018 17:13:48.681200    5077 system_pods.go:89] "coredns-66bc5c9577-467ch" [b89aeb20-752c-43e2-b8bb-580999350080] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 17:13:48.681209    5077 system_pods.go:89] "csi-hostpath-attacher-0" [a96e6e91-3fd7-4a35-96b6-dc9078bc0615] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 17:13:48.681217    5077 system_pods.go:89] "csi-hostpath-resizer-0" [2fbebb55-78e5-4594-9979-227aec6c93eb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 17:13:48.681223    5077 system_pods.go:89] "csi-hostpathplugin-9l87p" [43074c41-36f1-48ed-85da-9f4166509d86] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 17:13:48.681228    5077 system_pods.go:89] "etcd-addons-164474" [0c1d9b95-efb3-41c6-ad13-ecdc5e2aed23] Running
	I1018 17:13:48.681233    5077 system_pods.go:89] "kindnet-hsvb9" [70417575-3af4-4899-aaf4-eb73d8dc18fc] Running
	I1018 17:13:48.681237    5077 system_pods.go:89] "kube-apiserver-addons-164474" [d7998556-38e2-44d9-b248-e8168e01f0b7] Running
	I1018 17:13:48.681243    5077 system_pods.go:89] "kube-controller-manager-addons-164474" [66eae987-1a94-433b-8387-4fe4b8f54f6d] Running
	I1018 17:13:48.681251    5077 system_pods.go:89] "kube-ingress-dns-minikube" [587c439a-5adf-4c7a-b2cf-37b34fbc7fe4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 17:13:48.681258    5077 system_pods.go:89] "kube-proxy-ccs4c" [07b2f86d-366e-47c9-8aad-6b7b51f33565] Running
	I1018 17:13:48.681263    5077 system_pods.go:89] "kube-scheduler-addons-164474" [e288ffed-0c2c-4993-8d34-daa6250e509d] Running
	I1018 17:13:48.681269    5077 system_pods.go:89] "metrics-server-85b7d694d7-8dnml" [7cf655d5-48cd-488d-9cbd-b19f09925a22] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 17:13:48.681288    5077 system_pods.go:89] "nvidia-device-plugin-daemonset-w6sqz" [ef275008-60c3-4bde-a747-35f70a06cb02] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 17:13:48.681301    5077 system_pods.go:89] "registry-6b586f9694-fwkz8" [d12f3e97-a0a1-4ac6-aa88-1e38730ecf05] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 17:13:48.681307    5077 system_pods.go:89] "registry-creds-764b6fb674-k267j" [66a2f897-d4c3-4ebf-a15a-51183d31deaa] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 17:13:48.681318    5077 system_pods.go:89] "registry-proxy-6x6dm" [7cc511f1-c2a0-4516-85b6-eee6876bc7ae] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 17:13:48.681326    5077 system_pods.go:89] "snapshot-controller-7d9fbc56b8-f8bm6" [96f5066a-4702-41dc-b553-575c361e1501] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 17:13:48.681333    5077 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gnvj9" [a5334265-1b53-49d6-95e8-60a89ea17d73] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 17:13:48.681341    5077 system_pods.go:89] "storage-provisioner" [600b4ef5-41ba-4562-8384-bcfb6ce65634] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 17:13:48.681355    5077 retry.go:31] will retry after 436.900491ms: missing components: kube-dns
	I1018 17:13:48.794455    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:48.794761    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:49.035150    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:49.080093    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:49.183226    5077 system_pods.go:86] 19 kube-system pods found
	I1018 17:13:49.183266    5077 system_pods.go:89] "coredns-66bc5c9577-467ch" [b89aeb20-752c-43e2-b8bb-580999350080] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 17:13:49.183275    5077 system_pods.go:89] "csi-hostpath-attacher-0" [a96e6e91-3fd7-4a35-96b6-dc9078bc0615] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 17:13:49.183283    5077 system_pods.go:89] "csi-hostpath-resizer-0" [2fbebb55-78e5-4594-9979-227aec6c93eb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 17:13:49.183291    5077 system_pods.go:89] "csi-hostpathplugin-9l87p" [43074c41-36f1-48ed-85da-9f4166509d86] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 17:13:49.183299    5077 system_pods.go:89] "etcd-addons-164474" [0c1d9b95-efb3-41c6-ad13-ecdc5e2aed23] Running
	I1018 17:13:49.183320    5077 system_pods.go:89] "kindnet-hsvb9" [70417575-3af4-4899-aaf4-eb73d8dc18fc] Running
	I1018 17:13:49.183329    5077 system_pods.go:89] "kube-apiserver-addons-164474" [d7998556-38e2-44d9-b248-e8168e01f0b7] Running
	I1018 17:13:49.183334    5077 system_pods.go:89] "kube-controller-manager-addons-164474" [66eae987-1a94-433b-8387-4fe4b8f54f6d] Running
	I1018 17:13:49.183341    5077 system_pods.go:89] "kube-ingress-dns-minikube" [587c439a-5adf-4c7a-b2cf-37b34fbc7fe4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 17:13:49.183349    5077 system_pods.go:89] "kube-proxy-ccs4c" [07b2f86d-366e-47c9-8aad-6b7b51f33565] Running
	I1018 17:13:49.183356    5077 system_pods.go:89] "kube-scheduler-addons-164474" [e288ffed-0c2c-4993-8d34-daa6250e509d] Running
	I1018 17:13:49.183362    5077 system_pods.go:89] "metrics-server-85b7d694d7-8dnml" [7cf655d5-48cd-488d-9cbd-b19f09925a22] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 17:13:49.183372    5077 system_pods.go:89] "nvidia-device-plugin-daemonset-w6sqz" [ef275008-60c3-4bde-a747-35f70a06cb02] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 17:13:49.183380    5077 system_pods.go:89] "registry-6b586f9694-fwkz8" [d12f3e97-a0a1-4ac6-aa88-1e38730ecf05] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 17:13:49.183386    5077 system_pods.go:89] "registry-creds-764b6fb674-k267j" [66a2f897-d4c3-4ebf-a15a-51183d31deaa] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 17:13:49.183396    5077 system_pods.go:89] "registry-proxy-6x6dm" [7cc511f1-c2a0-4516-85b6-eee6876bc7ae] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 17:13:49.183404    5077 system_pods.go:89] "snapshot-controller-7d9fbc56b8-f8bm6" [96f5066a-4702-41dc-b553-575c361e1501] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 17:13:49.183413    5077 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gnvj9" [a5334265-1b53-49d6-95e8-60a89ea17d73] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 17:13:49.183419    5077 system_pods.go:89] "storage-provisioner" [600b4ef5-41ba-4562-8384-bcfb6ce65634] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 17:13:49.183437    5077 retry.go:31] will retry after 559.053592ms: missing components: kube-dns
	I1018 17:13:49.290196    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:49.292148    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:49.551290    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:49.648052    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:49.749508    5077 system_pods.go:86] 19 kube-system pods found
	I1018 17:13:49.749591    5077 system_pods.go:89] "coredns-66bc5c9577-467ch" [b89aeb20-752c-43e2-b8bb-580999350080] Running
	I1018 17:13:49.749607    5077 system_pods.go:89] "csi-hostpath-attacher-0" [a96e6e91-3fd7-4a35-96b6-dc9078bc0615] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 17:13:49.749617    5077 system_pods.go:89] "csi-hostpath-resizer-0" [2fbebb55-78e5-4594-9979-227aec6c93eb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 17:13:49.749628    5077 system_pods.go:89] "csi-hostpathplugin-9l87p" [43074c41-36f1-48ed-85da-9f4166509d86] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 17:13:49.749633    5077 system_pods.go:89] "etcd-addons-164474" [0c1d9b95-efb3-41c6-ad13-ecdc5e2aed23] Running
	I1018 17:13:49.749638    5077 system_pods.go:89] "kindnet-hsvb9" [70417575-3af4-4899-aaf4-eb73d8dc18fc] Running
	I1018 17:13:49.749647    5077 system_pods.go:89] "kube-apiserver-addons-164474" [d7998556-38e2-44d9-b248-e8168e01f0b7] Running
	I1018 17:13:49.749652    5077 system_pods.go:89] "kube-controller-manager-addons-164474" [66eae987-1a94-433b-8387-4fe4b8f54f6d] Running
	I1018 17:13:49.749664    5077 system_pods.go:89] "kube-ingress-dns-minikube" [587c439a-5adf-4c7a-b2cf-37b34fbc7fe4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 17:13:49.749669    5077 system_pods.go:89] "kube-proxy-ccs4c" [07b2f86d-366e-47c9-8aad-6b7b51f33565] Running
	I1018 17:13:49.749674    5077 system_pods.go:89] "kube-scheduler-addons-164474" [e288ffed-0c2c-4993-8d34-daa6250e509d] Running
	I1018 17:13:49.749682    5077 system_pods.go:89] "metrics-server-85b7d694d7-8dnml" [7cf655d5-48cd-488d-9cbd-b19f09925a22] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 17:13:49.749695    5077 system_pods.go:89] "nvidia-device-plugin-daemonset-w6sqz" [ef275008-60c3-4bde-a747-35f70a06cb02] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 17:13:49.749705    5077 system_pods.go:89] "registry-6b586f9694-fwkz8" [d12f3e97-a0a1-4ac6-aa88-1e38730ecf05] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 17:13:49.749713    5077 system_pods.go:89] "registry-creds-764b6fb674-k267j" [66a2f897-d4c3-4ebf-a15a-51183d31deaa] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 17:13:49.749719    5077 system_pods.go:89] "registry-proxy-6x6dm" [7cc511f1-c2a0-4516-85b6-eee6876bc7ae] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 17:13:49.749729    5077 system_pods.go:89] "snapshot-controller-7d9fbc56b8-f8bm6" [96f5066a-4702-41dc-b553-575c361e1501] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 17:13:49.749738    5077 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gnvj9" [a5334265-1b53-49d6-95e8-60a89ea17d73] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 17:13:49.749742    5077 system_pods.go:89] "storage-provisioner" [600b4ef5-41ba-4562-8384-bcfb6ce65634] Running
	I1018 17:13:49.749757    5077 system_pods.go:126] duration metric: took 1.63217472s to wait for k8s-apps to be running ...
	I1018 17:13:49.749765    5077 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 17:13:49.749833    5077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 17:13:49.763695    5077 system_svc.go:56] duration metric: took 13.920925ms WaitForService to wait for kubelet
	I1018 17:13:49.763723    5077 kubeadm.go:586] duration metric: took 43.380273046s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 17:13:49.763743    5077 node_conditions.go:102] verifying NodePressure condition ...
	I1018 17:13:49.766864    5077 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 17:13:49.766898    5077 node_conditions.go:123] node cpu capacity is 2
	I1018 17:13:49.766912    5077 node_conditions.go:105] duration metric: took 3.163934ms to run NodePressure ...
	I1018 17:13:49.766925    5077 start.go:241] waiting for startup goroutines ...
	I1018 17:13:49.848482    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:49.848656    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:50.035523    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:50.080088    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:50.288929    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:50.290243    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:50.536098    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:50.637083    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:50.790782    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:50.791549    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:51.034914    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:51.080606    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:51.291138    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:51.292273    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:51.534695    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:51.580520    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:51.791852    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:51.792151    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:52.037534    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:52.080776    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:52.301228    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:52.304133    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:52.552765    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:52.606983    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:52.796353    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:52.796558    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:53.037809    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:53.083482    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:53.292273    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:53.292404    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:53.331839    5077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 17:13:53.534732    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:53.581227    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:53.794013    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:53.794332    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:54.038219    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:54.082525    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:54.292513    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:54.292897    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:54.535091    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:54.597005    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:54.791689    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:54.791821    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:54.931060    5077 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.59918827s)
	W1018 17:13:54.931092    5077 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 17:13:54.931110    5077 retry.go:31] will retry after 7.871158109s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
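
The validation error above comes from kubectl's client-side schema check: every manifest must carry top-level apiVersion and kind fields, and the copy of /etc/kubernetes/addons/ig-crd.yaml on the node evidently has neither, which usually points at an empty or malformed file rather than an API-server problem (the other gadget objects all apply cleanly as "unchanged"). A minimal Go sketch of that check, assuming a single-document YAML file and the gopkg.in/yaml.v3 package; this is not minikube's or kubectl's code:

    // checkmanifest.go - hedged sketch: reproduce the "apiVersion not set,
    // kind not set" complaint for a single-document YAML manifest.
    package main

    import (
        "fmt"
        "os"

        "gopkg.in/yaml.v3" // assumed dependency for this sketch
    )

    type manifestHeader struct {
        APIVersion string `yaml:"apiVersion"`
        Kind       string `yaml:"kind"`
    }

    func main() {
        data, err := os.ReadFile("/etc/kubernetes/addons/ig-crd.yaml")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        var hdr manifestHeader
        if err := yaml.Unmarshal(data, &hdr); err != nil {
            fmt.Fprintln(os.Stderr, "not valid YAML:", err)
            os.Exit(1)
        }
        if hdr.APIVersion == "" || hdr.Kind == "" {
            // This is the condition kubectl reports as a validation error.
            fmt.Println("manifest is missing apiVersion and/or kind")
            os.Exit(1)
        }
        fmt.Printf("ok: %s/%s\n", hdr.APIVersion, hdr.Kind)
    }

Passing --validate=false, as the error message suggests, would skip this client-side check but would not supply the missing fields.
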
	I1018 17:13:55.043010    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:55.080383    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:55.290166    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:55.290866    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:55.535259    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:55.636879    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:55.790734    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:55.790932    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:56.035017    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:56.080729    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:56.290610    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:56.291978    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:56.535146    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:56.579954    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:56.795252    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:56.795261    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:57.034911    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:57.080546    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:57.292044    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:57.292499    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:57.535099    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:57.581030    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:57.792225    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:57.792717    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:58.035716    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:58.080016    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:58.289954    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:58.290356    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:58.534032    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:58.582557    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:58.792328    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:58.792451    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:59.034416    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:59.079708    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:59.290418    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:59.290580    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:59.534833    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:59.580129    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:59.793303    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:59.793971    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:00.039345    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:00.092404    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:00.302971    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:00.303525    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:00.535647    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:00.581539    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:00.793662    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:00.794128    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:01.034990    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:01.080250    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:01.291592    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:01.291933    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:01.535263    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:01.579851    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:01.791302    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:01.792231    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:02.034377    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:02.079481    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:02.291480    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:02.291756    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:02.535325    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:02.580360    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:02.792118    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:02.792415    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:02.802690    5077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 17:14:03.034701    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:03.080431    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:03.291771    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:03.291944    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:03.535135    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:03.606539    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:03.791351    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:03.791608    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:03.869617    5077 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.066879323s)
	W1018 17:14:03.869657    5077 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 17:14:03.869704    5077 retry.go:31] will retry after 30.307679297s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
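
The retry.go lines show the addon applier backing off between attempts rather than immediately re-running the same failing kubectl command: roughly 8 seconds after the first failure, roughly 30 seconds here. A rough sketch of that pattern in Go, with a hypothetical applyAddon function and made-up backoff values; this is not minikube's retry.go:

    // retrysketch.go - illustrative backoff loop only.
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // applyAddon stands in for the kubectl apply invocation; hypothetical.
    func applyAddon() error {
        return errors.New("apply failed") // always fails in this sketch
    }

    func main() {
        backoff := 5 * time.Second
        for attempt := 1; attempt <= 4; attempt++ {
            if err := applyAddon(); err == nil {
                fmt.Println("applied")
                return
            }
            // Grow the delay and add jitter so repeated failures spread out,
            // similar in spirit to the widening gaps in the log above.
            sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
            fmt.Printf("attempt %d failed, will retry after %s\n", attempt, sleep)
            time.Sleep(sleep)
            backoff *= 2
        }
        fmt.Println("giving up")
    }
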
	I1018 17:14:04.034262    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:04.080318    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:04.289665    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:04.290834    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:04.534064    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:04.579983    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:04.789497    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:04.790341    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:05.034512    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:05.079535    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:05.291183    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:05.292431    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:05.537310    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:05.580514    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:05.790359    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:05.790468    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:06.035296    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:06.080064    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:06.288785    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:06.290869    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:06.534735    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:06.579929    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:06.791220    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:06.792453    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:07.034756    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:07.080515    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:07.291262    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:07.291567    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:07.534768    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:07.580466    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:07.792012    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:07.792357    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:08.035334    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:08.080030    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:08.290040    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:08.291686    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:08.534713    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:08.580214    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:08.790736    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:08.791113    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:09.034748    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:09.080578    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:09.291857    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:09.293403    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:09.534587    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:09.579887    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:09.791961    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:09.792203    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:10.035315    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:10.082242    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:10.289954    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:10.290233    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:10.534727    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:10.582485    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:10.790125    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:10.791229    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:11.034551    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:11.080044    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:11.291275    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:11.291695    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:11.535011    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:11.579956    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:11.789481    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:11.790550    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:12.035011    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:12.080870    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:12.290428    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:12.290601    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:12.535053    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:12.579975    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:12.792581    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:12.793025    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:13.035696    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:13.080758    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:13.291547    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:13.291800    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:13.534636    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:13.579725    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:13.791065    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:13.790916    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:14.034391    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:14.080547    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:14.292494    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:14.293025    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:14.534350    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:14.580204    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:14.792058    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:14.792976    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:15.035559    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:15.081218    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:15.292290    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:15.292683    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:15.539811    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:15.582618    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:15.790386    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:15.790503    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:16.034145    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:16.078921    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:16.289172    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:16.289782    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:16.535236    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:16.579892    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:16.789239    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:16.790407    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:17.035041    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:17.135357    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:17.289518    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:17.290319    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:17.534527    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:17.579714    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:17.791391    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:17.791959    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:18.034175    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:18.079597    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:18.290924    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:18.292014    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:18.534574    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:18.579909    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:18.790921    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:18.791384    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:19.034940    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:19.080076    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:19.289973    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:19.290122    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:19.535169    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:19.579730    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:19.790908    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:19.791130    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:20.035331    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:20.080171    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:20.291025    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:20.291580    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:20.535133    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:20.587796    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:20.790859    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:20.790924    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:21.035299    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:21.079661    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:21.290275    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:21.290364    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:21.534619    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:21.580436    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:21.792212    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:21.792466    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:22.035081    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:22.080990    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:22.290688    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:22.291098    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:22.534601    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:22.580259    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:22.791566    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:22.792174    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:23.035624    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:23.080879    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:23.291429    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:23.291995    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:23.535530    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:23.580041    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:23.790963    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:23.791005    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:24.034785    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:24.080182    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:24.289549    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:24.291509    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:24.534429    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:24.579674    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:24.791698    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:24.792087    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:25.035966    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:25.081041    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:25.291816    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:25.292213    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:25.534644    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:25.581262    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:25.790848    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:25.791221    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:26.034038    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:26.080481    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:26.289955    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:26.290796    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:26.535339    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:26.580372    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:26.789705    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:26.790763    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:27.035180    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:27.079781    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:27.290675    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:27.291345    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:27.535540    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:27.581023    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:27.791315    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:27.792461    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:28.036074    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:28.080211    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:28.289359    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:28.291602    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:28.534688    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:28.579974    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:28.790452    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:28.790839    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:29.035156    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:29.080179    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:29.290352    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:29.290544    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:29.534020    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:29.580643    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:29.791651    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:29.792133    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:30.050732    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:30.080384    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:30.291060    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:30.291422    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:30.536509    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:30.579566    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:30.790541    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:30.792058    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:31.035412    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:31.137176    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:31.290182    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:31.290371    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:31.534553    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:31.580372    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:31.799864    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:31.800170    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:32.037155    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:32.080451    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:32.290096    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:32.291652    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:32.534805    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:32.580540    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:32.791640    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:32.791760    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:33.035124    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:33.079786    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:33.291255    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:33.291593    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:33.534804    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:33.580446    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:33.792277    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:33.792648    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:34.035187    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:34.079474    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:34.177858    5077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 17:14:34.290135    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:34.291314    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:34.534574    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:34.580252    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:34.795847    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:34.796141    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:35.035000    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 17:14:35.066747    5077 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1018 17:14:35.066900    5077 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
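
At this point the failure is downgraded to a warning (the out.go line above) and start-up continues, which is why the kapi.go waits for the other addon pods resume immediately afterwards. A simplified Go sketch of that pattern, with hypothetical callback names; not minikube's addons code:

    // warnsketch.go - illustrative only: run enable-addon callbacks and turn a
    // failure into a warning so the remaining addon waits can continue.
    package main

    import (
        "errors"
        "fmt"
    )

    // hypothetical callbacks; only inspektor-gadget fails in this sketch.
    var callbacks = map[string]func() error{
        "inspektor-gadget": func() error { return errors.New("running callbacks: apply failed") },
        "registry":         func() error { return nil },
    }

    func main() {
        for name, cb := range callbacks {
            if err := cb(); err != nil {
                // Warn and keep going instead of aborting the whole start-up.
                fmt.Printf("! Enabling '%s' returned an error: %v\n", name, err)
                continue
            }
            fmt.Printf("enabled %s\n", name)
        }
        fmt.Println("continuing to wait for remaining addon pods")
    }
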
	I1018 17:14:35.081034    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:35.290468    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:35.290642    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:35.534831    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:35.579951    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:35.799589    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:35.799778    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:36.035725    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:36.080869    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:36.289234    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:36.290819    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:36.535089    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:36.580039    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:36.791434    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:36.791593    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:37.050147    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:37.079695    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:37.291068    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:37.291228    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:37.534790    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:37.581095    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:37.790879    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:37.791245    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:38.036155    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:38.079795    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:38.290430    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:38.290571    5077 kapi.go:107] duration metric: took 1m25.504550646s to wait for kubernetes.io/minikube-addons=registry ...
	I1018 17:14:38.535093    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:38.579428    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:38.790733    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:39.036204    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:39.141502    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:39.290511    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:39.534842    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:39.580461    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:39.791575    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:40.040132    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:40.079438    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:40.290916    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:40.535294    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:40.579620    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:40.790675    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:41.035065    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:41.136808    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:41.291232    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:41.535163    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:41.580405    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:41.791439    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:42.034863    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:42.081265    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:42.292897    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:42.535124    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:42.592629    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:42.791320    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:43.034861    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:43.080184    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:43.290101    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:43.535367    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:43.579233    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:43.790827    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:44.035083    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:44.080852    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:44.291276    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:44.534767    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:44.580080    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:44.790842    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:45.043186    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:45.082189    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:45.292803    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:45.535099    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:45.579193    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:45.790849    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:46.035240    5077 kapi.go:107] duration metric: took 1m29.504005172s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1018 17:14:46.038418    5077 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-164474 cluster.
	I1018 17:14:46.041324    5077 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1018 17:14:46.044129    5077 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1018 17:14:46.079519    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:46.291312    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:46.580204    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:46.790563    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:47.080927    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:47.291758    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:47.579886    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:47.791394    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:48.079825    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:48.290420    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:48.580149    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:48.790514    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:49.080479    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:49.291227    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:49.580238    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:49.790329    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:50.082058    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:50.290595    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:50.580091    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:50.790570    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:51.080345    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:51.290742    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:51.580714    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:51.791206    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:52.079983    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:52.290032    5077 kapi.go:107] duration metric: took 1m39.50294224s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1018 17:14:52.579634    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:53.131207    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:53.580073    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:54.079750    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:54.579926    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:55.087833    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:55.579664    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:56.080005    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:56.579992    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:57.083663    5077 kapi.go:107] duration metric: took 1m44.007564352s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1018 17:14:57.086614    5077 out.go:179] * Enabled addons: cloud-spanner, storage-provisioner, registry-creds, nvidia-device-plugin, amd-gpu-device-plugin, default-storageclass, storage-provisioner-rancher, metrics-server, ingress-dns, yakd, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1018 17:14:57.089327    5077 addons.go:514] duration metric: took 1m50.705604027s for enable addons: enabled=[cloud-spanner storage-provisioner registry-creds nvidia-device-plugin amd-gpu-device-plugin default-storageclass storage-provisioner-rancher metrics-server ingress-dns yakd volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1018 17:14:57.089393    5077 start.go:246] waiting for cluster config update ...
	I1018 17:14:57.089418    5077 start.go:255] writing updated cluster config ...
	I1018 17:14:57.090664    5077 ssh_runner.go:195] Run: rm -f paused
	I1018 17:14:57.095142    5077 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 17:14:57.098531    5077 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-467ch" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:14:57.106219    5077 pod_ready.go:94] pod "coredns-66bc5c9577-467ch" is "Ready"
	I1018 17:14:57.106250    5077 pod_ready.go:86] duration metric: took 7.699234ms for pod "coredns-66bc5c9577-467ch" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:14:57.109072    5077 pod_ready.go:83] waiting for pod "etcd-addons-164474" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:14:57.114011    5077 pod_ready.go:94] pod "etcd-addons-164474" is "Ready"
	I1018 17:14:57.114033    5077 pod_ready.go:86] duration metric: took 4.934896ms for pod "etcd-addons-164474" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:14:57.116459    5077 pod_ready.go:83] waiting for pod "kube-apiserver-addons-164474" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:14:57.121646    5077 pod_ready.go:94] pod "kube-apiserver-addons-164474" is "Ready"
	I1018 17:14:57.121718    5077 pod_ready.go:86] duration metric: took 5.239516ms for pod "kube-apiserver-addons-164474" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:14:57.124376    5077 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-164474" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:14:57.499640    5077 pod_ready.go:94] pod "kube-controller-manager-addons-164474" is "Ready"
	I1018 17:14:57.499668    5077 pod_ready.go:86] duration metric: took 375.270436ms for pod "kube-controller-manager-addons-164474" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:14:57.698756    5077 pod_ready.go:83] waiting for pod "kube-proxy-ccs4c" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:14:58.099038    5077 pod_ready.go:94] pod "kube-proxy-ccs4c" is "Ready"
	I1018 17:14:58.099119    5077 pod_ready.go:86] duration metric: took 400.334533ms for pod "kube-proxy-ccs4c" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:14:58.299424    5077 pod_ready.go:83] waiting for pod "kube-scheduler-addons-164474" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:14:58.699345    5077 pod_ready.go:94] pod "kube-scheduler-addons-164474" is "Ready"
	I1018 17:14:58.699375    5077 pod_ready.go:86] duration metric: took 399.921506ms for pod "kube-scheduler-addons-164474" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:14:58.699388    5077 pod_ready.go:40] duration metric: took 1.604217812s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 17:14:59.099655    5077 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 17:14:59.102994    5077 out.go:179] * Done! kubectl is now configured to use "addons-164474" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 18 17:17:58 addons-164474 crio[831]: time="2025-10-18T17:17:58.089123529Z" level=info msg="Running pod sandbox: kube-system/registry-creds-764b6fb674-k267j/POD" id=b21b1e3d-798b-4ee4-b0a8-0777999ce0ff name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 17:17:58 addons-164474 crio[831]: time="2025-10-18T17:17:58.089204753Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 17:17:58 addons-164474 crio[831]: time="2025-10-18T17:17:58.101484541Z" level=info msg="Got pod network &{Name:registry-creds-764b6fb674-k267j Namespace:kube-system ID:442e4d5e312fb90ca7d9f91f62d3a0e23bbe26e7b2651f780ffa328b4716ea59 UID:66a2f897-d4c3-4ebf-a15a-51183d31deaa NetNS:/var/run/netns/6ee334a9-72c8-4ef3-ba26-81dd1c9206a4 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400049ec88}] Aliases:map[]}"
	Oct 18 17:17:58 addons-164474 crio[831]: time="2025-10-18T17:17:58.10172726Z" level=info msg="Adding pod kube-system_registry-creds-764b6fb674-k267j to CNI network \"kindnet\" (type=ptp)"
	Oct 18 17:17:58 addons-164474 crio[831]: time="2025-10-18T17:17:58.115778527Z" level=info msg="Got pod network &{Name:registry-creds-764b6fb674-k267j Namespace:kube-system ID:442e4d5e312fb90ca7d9f91f62d3a0e23bbe26e7b2651f780ffa328b4716ea59 UID:66a2f897-d4c3-4ebf-a15a-51183d31deaa NetNS:/var/run/netns/6ee334a9-72c8-4ef3-ba26-81dd1c9206a4 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400049ec88}] Aliases:map[]}"
	Oct 18 17:17:58 addons-164474 crio[831]: time="2025-10-18T17:17:58.119128925Z" level=info msg="Checking pod kube-system_registry-creds-764b6fb674-k267j for CNI network kindnet (type=ptp)"
	Oct 18 17:17:58 addons-164474 crio[831]: time="2025-10-18T17:17:58.235340492Z" level=info msg="Ran pod sandbox 442e4d5e312fb90ca7d9f91f62d3a0e23bbe26e7b2651f780ffa328b4716ea59 with infra container: kube-system/registry-creds-764b6fb674-k267j/POD" id=b21b1e3d-798b-4ee4-b0a8-0777999ce0ff name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 17:17:58 addons-164474 crio[831]: time="2025-10-18T17:17:58.239180324Z" level=info msg="Checking image status: docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=61d27c7e-1d32-48b8-9b83-a745d1bde1f3 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 17:17:58 addons-164474 crio[831]: time="2025-10-18T17:17:58.239494511Z" level=info msg="Image docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605 not found" id=61d27c7e-1d32-48b8-9b83-a745d1bde1f3 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 17:17:58 addons-164474 crio[831]: time="2025-10-18T17:17:58.239621569Z" level=info msg="Neither image nor artfiact docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605 found" id=61d27c7e-1d32-48b8-9b83-a745d1bde1f3 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 17:17:58 addons-164474 crio[831]: time="2025-10-18T17:17:58.309520557Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b" id=25d1cf08-7480-4d3c-8faa-e99a112a2dbd name=/runtime.v1.ImageService/PullImage
	Oct 18 17:17:58 addons-164474 crio[831]: time="2025-10-18T17:17:58.310089336Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=f7c1ea54-0da6-4524-a94f-3e259a8ceb7e name=/runtime.v1.ImageService/ImageStatus
	Oct 18 17:17:58 addons-164474 crio[831]: time="2025-10-18T17:17:58.313076053Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=827fad70-cdb5-44ea-aa8c-ed0d9b1cd4a0 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 17:17:58 addons-164474 crio[831]: time="2025-10-18T17:17:58.314011402Z" level=info msg="Pulling image: docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=dd560d7f-6462-4e68-911c-2b7ff6eed15a name=/runtime.v1.ImageService/PullImage
	Oct 18 17:17:58 addons-164474 crio[831]: time="2025-10-18T17:17:58.317230048Z" level=info msg="Trying to access \"docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605\""
	Oct 18 17:17:58 addons-164474 crio[831]: time="2025-10-18T17:17:58.323023533Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-ppbs2/hello-world-app" id=d3c818f2-a293-43d8-8fb2-b3ae8e51a0c6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 17:17:58 addons-164474 crio[831]: time="2025-10-18T17:17:58.323868493Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 17:17:58 addons-164474 crio[831]: time="2025-10-18T17:17:58.354588639Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 17:17:58 addons-164474 crio[831]: time="2025-10-18T17:17:58.36137312Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/0112aaa5695a8202f07d57ccb79acb6e090e94ecf53592cf743117254024eb07/merged/etc/passwd: no such file or directory"
	Oct 18 17:17:58 addons-164474 crio[831]: time="2025-10-18T17:17:58.361405654Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/0112aaa5695a8202f07d57ccb79acb6e090e94ecf53592cf743117254024eb07/merged/etc/group: no such file or directory"
	Oct 18 17:17:58 addons-164474 crio[831]: time="2025-10-18T17:17:58.361692928Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 17:17:58 addons-164474 crio[831]: time="2025-10-18T17:17:58.399184484Z" level=info msg="Created container ccee074e4d66d81797f974f438f05bc3e33c29c77848ae33e5859a69f37a59fd: default/hello-world-app-5d498dc89-ppbs2/hello-world-app" id=d3c818f2-a293-43d8-8fb2-b3ae8e51a0c6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 17:17:58 addons-164474 crio[831]: time="2025-10-18T17:17:58.402343093Z" level=info msg="Starting container: ccee074e4d66d81797f974f438f05bc3e33c29c77848ae33e5859a69f37a59fd" id=4d8009e8-335e-4802-bb6c-781130653897 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 17:17:58 addons-164474 crio[831]: time="2025-10-18T17:17:58.407350023Z" level=info msg="Started container" PID=7104 containerID=ccee074e4d66d81797f974f438f05bc3e33c29c77848ae33e5859a69f37a59fd description=default/hello-world-app-5d498dc89-ppbs2/hello-world-app id=4d8009e8-335e-4802-bb6c-781130653897 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e538be7364cfe0facede7d94186ccbdbd4e742df81739e40fbc7e7e00dcd0a61
	Oct 18 17:17:58 addons-164474 crio[831]: time="2025-10-18T17:17:58.549236061Z" level=info msg="Image operating system mismatch: image uses OS \"linux\"+architecture \"amd64\"+\"\", expecting one of \"linux+arm64+\\\"v8\\\", linux+arm64+\\\"\\\"\""
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	ccee074e4d66d       docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b                                        1 second ago        Running             hello-world-app                          0                   e538be7364cfe       hello-world-app-5d498dc89-ppbs2             default
	905fc9b994d84       docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0                                              2 minutes ago       Running             nginx                                    0                   26eaa7a0c3afe       nginx                                       default
	6cd2dd6d87a85       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          2 minutes ago       Running             busybox                                  0                   1281c408b40f3       busybox                                     default
	968c95a146a7f       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          3 minutes ago       Running             csi-snapshotter                          0                   e97e4d507df8e       csi-hostpathplugin-9l87p                    kube-system
	7657f768a8a9a       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          3 minutes ago       Running             csi-provisioner                          0                   e97e4d507df8e       csi-hostpathplugin-9l87p                    kube-system
	a4777ba56bbe1       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            3 minutes ago       Running             liveness-probe                           0                   e97e4d507df8e       csi-hostpathplugin-9l87p                    kube-system
	cdf72845ca4f0       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           3 minutes ago       Running             hostpath                                 0                   e97e4d507df8e       csi-hostpathplugin-9l87p                    kube-system
	48dfa16c4c6d7       registry.k8s.io/ingress-nginx/controller@sha256:4ae52268a9493fc308d5f2fb67fe657d2499293aa644122d385ddb60c2330dbc                             3 minutes ago       Running             controller                               0                   5f13b9e551841       ingress-nginx-controller-675c5ddd98-9vsqk   ingress-nginx
	465d642f21c7a       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 3 minutes ago       Running             gcp-auth                                 0                   1528d396f0f9f       gcp-auth-78565c9fb4-4hmpz                   gcp-auth
	cbf41849e12c0       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                3 minutes ago       Running             node-driver-registrar                    0                   e97e4d507df8e       csi-hostpathplugin-9l87p                    kube-system
	5a594f8d1f286       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:f279436ecca5b88c20fd93c0d2a668ace136058ecad987e96e26014585e335b4                            3 minutes ago       Running             gadget                                   0                   5bd19d7b95a8b       gadget-sh2jw                                gadget
	cd1c762de0b5d       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              3 minutes ago       Running             registry-proxy                           0                   4a23982b51523       registry-proxy-6x6dm                        kube-system
	f97b941babec4       nvcr.io/nvidia/k8s-device-plugin@sha256:206d989142113ab71eaf27958a0e0a203f40103cf5b48890f5de80fd1b3fcfde                                     3 minutes ago       Running             nvidia-device-plugin-ctr                 0                   f54e1b3b0913d       nvidia-device-plugin-daemonset-w6sqz        kube-system
	c763b99ed4a70       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           3 minutes ago       Running             registry                                 0                   dcf177613e7c0       registry-6b586f9694-fwkz8                   kube-system
	26297b4bb5620       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   3 minutes ago       Running             csi-external-health-monitor-controller   0                   e97e4d507df8e       csi-hostpathplugin-9l87p                    kube-system
	ae65eedb55756       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   3 minutes ago       Exited              patch                                    0                   4fa21d6c5ff65       ingress-nginx-admission-patch-xdpbv         ingress-nginx
	14f2f76f82dc9       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago       Running             volume-snapshot-controller               0                   cc41371971419       snapshot-controller-7d9fbc56b8-f8bm6        kube-system
	676ff2293e1f8       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             3 minutes ago       Running             local-path-provisioner                   0                   e148bce858711       local-path-provisioner-648f6765c9-dssgc     local-path-storage
	901f9bb2898fa       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             3 minutes ago       Running             csi-attacher                             0                   a809314725ed4       csi-hostpath-attacher-0                     kube-system
	8a59f8ac6ef28       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               3 minutes ago       Running             minikube-ingress-dns                     0                   5832b0282c647       kube-ingress-dns-minikube                   kube-system
	ac84eee03f897       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   3 minutes ago       Exited              create                                   0                   95ab4ffe1ec2f       ingress-nginx-admission-create-9qw6v        ingress-nginx
	ce684ca523f08       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago       Running             volume-snapshot-controller               0                   f9e45235974e7       snapshot-controller-7d9fbc56b8-gnvj9        kube-system
	137ab15901ce8       gcr.io/cloud-spanner-emulator/emulator@sha256:c2688dc4b7ecb4546084321d63c2b3b616a54263488137e18fcb7c7005aef086                               3 minutes ago       Running             cloud-spanner-emulator                   0                   0bfeed615e7eb       cloud-spanner-emulator-86bd5cbb97-gwtv9     default
	f4d16c0746e45       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              4 minutes ago       Running             yakd                                     0                   842c5d10e4082       yakd-dashboard-5ff678cb9-v54wx              yakd-dashboard
	f402fe3063f55       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              4 minutes ago       Running             csi-resizer                              0                   0e93a9af74be2       csi-hostpath-resizer-0                      kube-system
	6865806b912ab       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        4 minutes ago       Running             metrics-server                           0                   7e18f7c741efd       metrics-server-85b7d694d7-8dnml             kube-system
	8d07fa8a1c45f       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             4 minutes ago       Running             storage-provisioner                      0                   246d92ae28bfb       storage-provisioner                         kube-system
	ece6fd8e36b74       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             4 minutes ago       Running             coredns                                  0                   f94b0bcd8d10c       coredns-66bc5c9577-467ch                    kube-system
	d12b84a601116       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             4 minutes ago       Running             kindnet-cni                              0                   58badd650ed5d       kindnet-hsvb9                               kube-system
	d87115dc1b972       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             4 minutes ago       Running             kube-proxy                               0                   9feea4b7319dd       kube-proxy-ccs4c                            kube-system
	07f016f168b62       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             5 minutes ago       Running             kube-apiserver                           0                   3383374da1a74       kube-apiserver-addons-164474                kube-system
	f085bccd65219       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             5 minutes ago       Running             etcd                                     0                   aa4342b95837d       etcd-addons-164474                          kube-system
	4a1b92f8cd14a       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             5 minutes ago       Running             kube-controller-manager                  0                   1c2b2d461293d       kube-controller-manager-addons-164474       kube-system
	246aa3ddddf57       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             5 minutes ago       Running             kube-scheduler                           0                   50bde38df0541       kube-scheduler-addons-164474                kube-system
	
	
	==> coredns [ece6fd8e36b7414b9ea8a96fa9d85543498f89e17705fa3bc262b1570f482b24] <==
	[INFO] 10.244.0.18:51212 - 58804 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002463706s
	[INFO] 10.244.0.18:51212 - 59680 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000124703s
	[INFO] 10.244.0.18:51212 - 51654 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000081109s
	[INFO] 10.244.0.18:42452 - 16273 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00013834s
	[INFO] 10.244.0.18:42452 - 16519 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00018703s
	[INFO] 10.244.0.18:59308 - 9218 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000113462s
	[INFO] 10.244.0.18:59308 - 9046 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000164622s
	[INFO] 10.244.0.18:39552 - 61561 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000111345s
	[INFO] 10.244.0.18:39552 - 61356 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000085211s
	[INFO] 10.244.0.18:36246 - 55836 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001374994s
	[INFO] 10.244.0.18:36246 - 56264 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001622546s
	[INFO] 10.244.0.18:50331 - 61718 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000105921s
	[INFO] 10.244.0.18:50331 - 61603 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000149425s
	[INFO] 10.244.0.20:32993 - 701 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000145536s
	[INFO] 10.244.0.20:47126 - 30960 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000106546s
	[INFO] 10.244.0.20:53787 - 54326 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000112781s
	[INFO] 10.244.0.20:58302 - 44288 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000099801s
	[INFO] 10.244.0.20:53817 - 43948 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000117073s
	[INFO] 10.244.0.20:60921 - 10992 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000106184s
	[INFO] 10.244.0.20:49984 - 51656 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002101864s
	[INFO] 10.244.0.20:43300 - 52521 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001596659s
	[INFO] 10.244.0.20:49777 - 36046 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002583167s
	[INFO] 10.244.0.20:33970 - 60645 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.002864615s
	[INFO] 10.244.0.23:54444 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000370286s
	[INFO] 10.244.0.23:58159 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000104485s
	
	
	==> describe nodes <==
	Name:               addons-164474
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-164474
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=addons-164474
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T17_13_02_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-164474
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-164474"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 17:12:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-164474
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 17:17:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 17:17:26 +0000   Sat, 18 Oct 2025 17:12:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 17:17:26 +0000   Sat, 18 Oct 2025 17:12:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 17:17:26 +0000   Sat, 18 Oct 2025 17:12:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 17:17:26 +0000   Sat, 18 Oct 2025 17:13:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-164474
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                bcbaa3b1-55d6-41b1-a200-9f6a4cc99665
	  Boot ID:                    8a1d6305-2994-4aa4-a4cc-6b62966b9918
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m59s
	  default                     cloud-spanner-emulator-86bd5cbb97-gwtv9      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m49s
	  default                     hello-world-app-5d498dc89-ppbs2              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  gadget                      gadget-sh2jw                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m47s
	  gcp-auth                    gcp-auth-78565c9fb4-4hmpz                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m43s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-9vsqk    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         4m47s
	  kube-system                 coredns-66bc5c9577-467ch                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     4m53s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m47s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m46s
	  kube-system                 csi-hostpathplugin-9l87p                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m12s
	  kube-system                 etcd-addons-164474                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         4m58s
	  kube-system                 kindnet-hsvb9                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4m53s
	  kube-system                 kube-apiserver-addons-164474                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m
	  kube-system                 kube-controller-manager-addons-164474        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m48s
	  kube-system                 kube-proxy-ccs4c                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 kube-scheduler-addons-164474                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 metrics-server-85b7d694d7-8dnml              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         4m49s
	  kube-system                 nvidia-device-plugin-daemonset-w6sqz         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m12s
	  kube-system                 registry-6b586f9694-fwkz8                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 registry-creds-764b6fb674-k267j              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 registry-proxy-6x6dm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m12s
	  kube-system                 snapshot-controller-7d9fbc56b8-f8bm6         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m47s
	  kube-system                 snapshot-controller-7d9fbc56b8-gnvj9         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m47s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m48s
	  local-path-storage          local-path-provisioner-648f6765c9-dssgc      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m48s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-v54wx               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     4m48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 4m52s                kube-proxy       
	  Normal   NodeHasSufficientMemory  5m4s (x8 over 5m5s)  kubelet          Node addons-164474 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m4s (x8 over 5m5s)  kubelet          Node addons-164474 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m4s (x8 over 5m5s)  kubelet          Node addons-164474 status is now: NodeHasSufficientPID
	  Normal   Starting                 4m58s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m58s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  4m58s                kubelet          Node addons-164474 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m58s                kubelet          Node addons-164474 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m58s                kubelet          Node addons-164474 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m54s                node-controller  Node addons-164474 event: Registered Node addons-164474 in Controller
	  Normal   NodeReady                4m12s                kubelet          Node addons-164474 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct18 16:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014995] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.499206] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.035776] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.808632] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.418900] kauditd_printk_skb: 36 callbacks suppressed
	[Oct18 17:12] overlayfs: idmapped layers are currently not supported
	[  +0.082393] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [f085bccd65219cd8bb8d59ffcc8bee71589bead44d17e3e6fe5269fe6781f2f3] <==
	{"level":"warn","ts":"2025-10-18T17:12:57.118566Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T17:12:57.132140Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T17:12:57.145853Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T17:12:57.176193Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T17:12:57.186037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T17:12:57.203465Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T17:12:57.226225Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T17:12:57.242513Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T17:12:57.253116Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T17:12:57.278512Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T17:12:57.290001Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T17:12:57.306618Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T17:12:57.328529Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T17:12:57.343996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T17:12:57.361991Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T17:12:57.385863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T17:12:57.429145Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T17:12:57.447507Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T17:12:57.498551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T17:13:13.554196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T17:13:13.587244Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T17:13:35.295316Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T17:13:35.329517Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T17:13:35.345028Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T17:13:35.359900Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41326","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [465d642f21c7ad346b0ec9f2b4225bbe6eb67e9bbd2b751784c0bde16c473589] <==
	2025/10/18 17:14:45 GCP Auth Webhook started!
	2025/10/18 17:14:59 Ready to marshal response ...
	2025/10/18 17:14:59 Ready to write response ...
	2025/10/18 17:15:00 Ready to marshal response ...
	2025/10/18 17:15:00 Ready to write response ...
	2025/10/18 17:15:00 Ready to marshal response ...
	2025/10/18 17:15:00 Ready to write response ...
	2025/10/18 17:15:19 Ready to marshal response ...
	2025/10/18 17:15:19 Ready to write response ...
	2025/10/18 17:15:24 Ready to marshal response ...
	2025/10/18 17:15:24 Ready to write response ...
	2025/10/18 17:15:24 Ready to marshal response ...
	2025/10/18 17:15:24 Ready to write response ...
	2025/10/18 17:15:32 Ready to marshal response ...
	2025/10/18 17:15:32 Ready to write response ...
	2025/10/18 17:15:37 Ready to marshal response ...
	2025/10/18 17:15:37 Ready to write response ...
	2025/10/18 17:15:47 Ready to marshal response ...
	2025/10/18 17:15:47 Ready to write response ...
	2025/10/18 17:16:10 Ready to marshal response ...
	2025/10/18 17:16:10 Ready to write response ...
	2025/10/18 17:17:57 Ready to marshal response ...
	2025/10/18 17:17:57 Ready to write response ...
	
	
	==> kernel <==
	 17:17:59 up  1:00,  0 user,  load average: 1.01, 0.90, 0.47
	Linux addons-164474 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d12b84a60111629c5268442b96bd59e440c9aec3f86f326d9528b07daa476596] <==
	I1018 17:15:57.209047       1 main.go:301] handling current node
	I1018 17:16:07.206132       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 17:16:07.206231       1 main.go:301] handling current node
	I1018 17:16:17.205591       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 17:16:17.205622       1 main.go:301] handling current node
	I1018 17:16:27.212680       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 17:16:27.212714       1 main.go:301] handling current node
	I1018 17:16:37.214274       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 17:16:37.214308       1 main.go:301] handling current node
	I1018 17:16:47.209624       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 17:16:47.209661       1 main.go:301] handling current node
	I1018 17:16:57.214256       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 17:16:57.214290       1 main.go:301] handling current node
	I1018 17:17:07.205736       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 17:17:07.205853       1 main.go:301] handling current node
	I1018 17:17:17.206069       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 17:17:17.206131       1 main.go:301] handling current node
	I1018 17:17:27.206104       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 17:17:27.206138       1 main.go:301] handling current node
	I1018 17:17:37.209991       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 17:17:37.210023       1 main.go:301] handling current node
	I1018 17:17:47.206627       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 17:17:47.206674       1 main.go:301] handling current node
	I1018 17:17:57.215049       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 17:17:57.215153       1 main.go:301] handling current node
	
	
	==> kube-apiserver [07f016f168b62771c5ab60ab8215041fcead58a20ef1da5932bcb8d6da58077f] <==
	 > logger="UnhandledError"
	W1018 17:13:55.625052       1 handler_proxy.go:99] no RequestInfo found in the context
	E1018 17:13:55.625098       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1018 17:13:55.625111       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1018 17:13:55.625196       1 handler_proxy.go:99] no RequestInfo found in the context
	E1018 17:13:55.625266       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1018 17:13:55.626326       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1018 17:13:59.630026       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.50.17:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.50.17:443/apis/metrics.k8s.io/v1beta1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	W1018 17:13:59.630363       1 handler_proxy.go:99] no RequestInfo found in the context
	E1018 17:13:59.630399       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1018 17:13:59.687979       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1018 17:13:59.711343       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E1018 17:15:08.749229       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:54228: use of closed network connection
	E1018 17:15:09.003407       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:54258: use of closed network connection
	I1018 17:15:36.755202       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1018 17:15:37.151706       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.98.50.217"}
	I1018 17:15:59.267422       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1018 17:16:18.661637       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I1018 17:17:57.446107       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.97.95.96"}
	
	
	==> kube-controller-manager [4a1b92f8cd14a17c1e2790e1ca03a5608e43fb0ee84dba04aae2757215b8f043] <==
	I1018 17:13:05.308359       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-164474" podCIDRs=["10.244.0.0/24"]
	I1018 17:13:05.309657       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1018 17:13:05.309749       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1018 17:13:05.314159       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 17:13:05.314257       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1018 17:13:05.315436       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1018 17:13:05.315852       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1018 17:13:05.315919       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1018 17:13:05.315929       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1018 17:13:05.316088       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 17:13:05.316128       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 17:13:05.321023       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 17:13:05.321140       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1018 17:13:05.321494       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1018 17:13:05.321523       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1018 17:13:05.321560       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	E1018 17:13:10.833728       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1018 17:13:35.285763       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1018 17:13:35.285923       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1018 17:13:35.285966       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1018 17:13:35.317550       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1018 17:13:35.326637       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1018 17:13:35.386997       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 17:13:35.427080       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 17:13:50.318836       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [d87115dc1b972147e18ebd00d21f7d791e5831c69fbef5f5e25fb2fade668bf7] <==
	I1018 17:13:07.237691       1 server_linux.go:53] "Using iptables proxy"
	I1018 17:13:07.354749       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 17:13:07.455784       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 17:13:07.455823       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1018 17:13:07.455908       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 17:13:07.499064       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 17:13:07.499142       1 server_linux.go:132] "Using iptables Proxier"
	I1018 17:13:07.510044       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 17:13:07.524225       1 server.go:527] "Version info" version="v1.34.1"
	I1018 17:13:07.524258       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 17:13:07.536647       1 config.go:200] "Starting service config controller"
	I1018 17:13:07.536667       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 17:13:07.536693       1 config.go:106] "Starting endpoint slice config controller"
	I1018 17:13:07.536697       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 17:13:07.536705       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 17:13:07.536708       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 17:13:07.539862       1 config.go:309] "Starting node config controller"
	I1018 17:13:07.539879       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 17:13:07.539886       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 17:13:07.637516       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 17:13:07.637547       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 17:13:07.637560       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [246aa3ddddf57033502d5fd5679ade1ae4e79cefdfdc7645841ea4f17a3e0313] <==
	E1018 17:12:58.370100       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 17:12:58.370164       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 17:12:58.370217       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 17:12:58.370233       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 17:12:58.370299       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 17:12:58.370308       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 17:12:58.370373       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 17:12:58.370406       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 17:12:58.370485       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 17:12:58.370544       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 17:12:58.371096       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 17:12:58.371191       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 17:12:58.371296       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 17:12:58.372034       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 17:12:59.165179       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 17:12:59.237421       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 17:12:59.281284       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 17:12:59.297283       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 17:12:59.453088       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1018 17:12:59.499485       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 17:12:59.524414       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 17:12:59.536626       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 17:12:59.540310       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 17:12:59.552642       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1018 17:13:01.921868       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 17:16:11 addons-164474 kubelet[1270]: I1018 17:16:11.327286    1270 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/task-pv-pod-restore" podStartSLOduration=0.953569003 podStartE2EDuration="1.327267623s" podCreationTimestamp="2025-10-18 17:16:10 +0000 UTC" firstStartedPulling="2025-10-18 17:16:10.37263639 +0000 UTC m=+189.311652515" lastFinishedPulling="2025-10-18 17:16:10.746335001 +0000 UTC m=+189.685351135" observedRunningTime="2025-10-18 17:16:11.326379332 +0000 UTC m=+190.265395466" watchObservedRunningTime="2025-10-18 17:16:11.327267623 +0000 UTC m=+190.266283757"
	Oct 18 17:16:18 addons-164474 kubelet[1270]: I1018 17:16:18.340869    1270 scope.go:117] "RemoveContainer" containerID="2ef158c6588721641166f50b2697bf4c4d27e62ab0aa449583610c396e157ea9"
	Oct 18 17:16:18 addons-164474 kubelet[1270]: I1018 17:16:18.341097    1270 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/b9f81bd7-0754-4ba1-a332-77af31585c97-gcp-creds\") pod \"b9f81bd7-0754-4ba1-a332-77af31585c97\" (UID: \"b9f81bd7-0754-4ba1-a332-77af31585c97\") "
	Oct 18 17:16:18 addons-164474 kubelet[1270]: I1018 17:16:18.341234    1270 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"task-pv-storage\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^237bc8cb-ac46-11f0-ba85-862cc2e5164f\") pod \"b9f81bd7-0754-4ba1-a332-77af31585c97\" (UID: \"b9f81bd7-0754-4ba1-a332-77af31585c97\") "
	Oct 18 17:16:18 addons-164474 kubelet[1270]: I1018 17:16:18.341263    1270 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5c7vl\" (UniqueName: \"kubernetes.io/projected/b9f81bd7-0754-4ba1-a332-77af31585c97-kube-api-access-5c7vl\") pod \"b9f81bd7-0754-4ba1-a332-77af31585c97\" (UID: \"b9f81bd7-0754-4ba1-a332-77af31585c97\") "
	Oct 18 17:16:18 addons-164474 kubelet[1270]: I1018 17:16:18.342069    1270 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9f81bd7-0754-4ba1-a332-77af31585c97-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "b9f81bd7-0754-4ba1-a332-77af31585c97" (UID: "b9f81bd7-0754-4ba1-a332-77af31585c97"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Oct 18 17:16:18 addons-164474 kubelet[1270]: I1018 17:16:18.347921    1270 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9f81bd7-0754-4ba1-a332-77af31585c97-kube-api-access-5c7vl" (OuterVolumeSpecName: "kube-api-access-5c7vl") pod "b9f81bd7-0754-4ba1-a332-77af31585c97" (UID: "b9f81bd7-0754-4ba1-a332-77af31585c97"). InnerVolumeSpecName "kube-api-access-5c7vl". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 18 17:16:18 addons-164474 kubelet[1270]: I1018 17:16:18.360626    1270 scope.go:117] "RemoveContainer" containerID="2ef158c6588721641166f50b2697bf4c4d27e62ab0aa449583610c396e157ea9"
	Oct 18 17:16:18 addons-164474 kubelet[1270]: E1018 17:16:18.361982    1270 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ef158c6588721641166f50b2697bf4c4d27e62ab0aa449583610c396e157ea9\": container with ID starting with 2ef158c6588721641166f50b2697bf4c4d27e62ab0aa449583610c396e157ea9 not found: ID does not exist" containerID="2ef158c6588721641166f50b2697bf4c4d27e62ab0aa449583610c396e157ea9"
	Oct 18 17:16:18 addons-164474 kubelet[1270]: I1018 17:16:18.362154    1270 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ef158c6588721641166f50b2697bf4c4d27e62ab0aa449583610c396e157ea9"} err="failed to get container status \"2ef158c6588721641166f50b2697bf4c4d27e62ab0aa449583610c396e157ea9\": rpc error: code = NotFound desc = could not find container \"2ef158c6588721641166f50b2697bf4c4d27e62ab0aa449583610c396e157ea9\": container with ID starting with 2ef158c6588721641166f50b2697bf4c4d27e62ab0aa449583610c396e157ea9 not found: ID does not exist"
	Oct 18 17:16:18 addons-164474 kubelet[1270]: I1018 17:16:18.365341    1270 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^237bc8cb-ac46-11f0-ba85-862cc2e5164f" (OuterVolumeSpecName: "task-pv-storage") pod "b9f81bd7-0754-4ba1-a332-77af31585c97" (UID: "b9f81bd7-0754-4ba1-a332-77af31585c97"). InnerVolumeSpecName "pvc-4a62e43d-e46a-41fb-a581-6e87a4a9ea05". PluginName "kubernetes.io/csi", VolumeGIDValue ""
	Oct 18 17:16:18 addons-164474 kubelet[1270]: I1018 17:16:18.442129    1270 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5c7vl\" (UniqueName: \"kubernetes.io/projected/b9f81bd7-0754-4ba1-a332-77af31585c97-kube-api-access-5c7vl\") on node \"addons-164474\" DevicePath \"\""
	Oct 18 17:16:18 addons-164474 kubelet[1270]: I1018 17:16:18.442174    1270 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/b9f81bd7-0754-4ba1-a332-77af31585c97-gcp-creds\") on node \"addons-164474\" DevicePath \"\""
	Oct 18 17:16:18 addons-164474 kubelet[1270]: I1018 17:16:18.442207    1270 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-4a62e43d-e46a-41fb-a581-6e87a4a9ea05\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^237bc8cb-ac46-11f0-ba85-862cc2e5164f\") on node \"addons-164474\" "
	Oct 18 17:16:18 addons-164474 kubelet[1270]: I1018 17:16:18.447265    1270 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-4a62e43d-e46a-41fb-a581-6e87a4a9ea05" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^237bc8cb-ac46-11f0-ba85-862cc2e5164f") on node "addons-164474"
	Oct 18 17:16:18 addons-164474 kubelet[1270]: I1018 17:16:18.543560    1270 reconciler_common.go:299] "Volume detached for volume \"pvc-4a62e43d-e46a-41fb-a581-6e87a4a9ea05\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^237bc8cb-ac46-11f0-ba85-862cc2e5164f\") on node \"addons-164474\" DevicePath \"\""
	Oct 18 17:16:19 addons-164474 kubelet[1270]: I1018 17:16:19.188863    1270 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9f81bd7-0754-4ba1-a332-77af31585c97" path="/var/lib/kubelet/pods/b9f81bd7-0754-4ba1-a332-77af31585c97/volumes"
	Oct 18 17:16:44 addons-164474 kubelet[1270]: I1018 17:16:44.183738    1270 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-fwkz8" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 17:17:23 addons-164474 kubelet[1270]: I1018 17:17:23.184492    1270 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-6x6dm" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 17:17:24 addons-164474 kubelet[1270]: I1018 17:17:24.184050    1270 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-w6sqz" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 17:17:57 addons-164474 kubelet[1270]: I1018 17:17:57.394075    1270 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/7f3a7f5d-ba17-4744-8107-1415b8581e69-gcp-creds\") pod \"hello-world-app-5d498dc89-ppbs2\" (UID: \"7f3a7f5d-ba17-4744-8107-1415b8581e69\") " pod="default/hello-world-app-5d498dc89-ppbs2"
	Oct 18 17:17:57 addons-164474 kubelet[1270]: I1018 17:17:57.394628    1270 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vmz2\" (UniqueName: \"kubernetes.io/projected/7f3a7f5d-ba17-4744-8107-1415b8581e69-kube-api-access-4vmz2\") pod \"hello-world-app-5d498dc89-ppbs2\" (UID: \"7f3a7f5d-ba17-4744-8107-1415b8581e69\") " pod="default/hello-world-app-5d498dc89-ppbs2"
	Oct 18 17:17:58 addons-164474 kubelet[1270]: I1018 17:17:58.084479    1270 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-k267j" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 17:17:58 addons-164474 kubelet[1270]: I1018 17:17:58.184363    1270 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-fwkz8" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 17:17:58 addons-164474 kubelet[1270]: W1018 17:17:58.235062    1270 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/31000ccc16f2da54474476b9a5eeb51132587beec766c8579e875c01b1c476ea/crio-442e4d5e312fb90ca7d9f91f62d3a0e23bbe26e7b2651f780ffa328b4716ea59 WatchSource:0}: Error finding container 442e4d5e312fb90ca7d9f91f62d3a0e23bbe26e7b2651f780ffa328b4716ea59: Status 404 returned error can't find the container with id 442e4d5e312fb90ca7d9f91f62d3a0e23bbe26e7b2651f780ffa328b4716ea59
	
	
	==> storage-provisioner [8d07fa8a1c45fb2b7f3f20b332023c1b057391cd1e4435eb47db001464e9ada7] <==
	W1018 17:17:35.932257       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:17:37.935493       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:17:37.940042       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:17:39.943604       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:17:39.950290       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:17:41.953645       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:17:41.958916       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:17:43.962545       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:17:43.969115       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:17:45.972373       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:17:45.979024       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:17:47.981780       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:17:47.992836       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:17:49.996254       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:17:50.002530       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:17:52.010208       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:17:52.015071       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:17:54.017807       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:17:54.022540       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:17:56.026043       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:17:56.030611       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:17:58.036700       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:17:58.052096       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:18:00.065484       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:18:00.112514       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-164474 -n addons-164474
helpers_test.go:269: (dbg) Run:  kubectl --context addons-164474 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-9qw6v ingress-nginx-admission-patch-xdpbv
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-164474 describe pod ingress-nginx-admission-create-9qw6v ingress-nginx-admission-patch-xdpbv
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-164474 describe pod ingress-nginx-admission-create-9qw6v ingress-nginx-admission-patch-xdpbv: exit status 1 (132.743205ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-9qw6v" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-xdpbv" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-164474 describe pod ingress-nginx-admission-create-9qw6v ingress-nginx-admission-patch-xdpbv: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-164474 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-164474 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (333.600799ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 17:18:01.506008   14718 out.go:360] Setting OutFile to fd 1 ...
	I1018 17:18:01.506310   14718 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 17:18:01.506343   14718 out.go:374] Setting ErrFile to fd 2...
	I1018 17:18:01.506361   14718 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 17:18:01.506660   14718 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-2509/.minikube/bin
	I1018 17:18:01.506997   14718 mustload.go:65] Loading cluster: addons-164474
	I1018 17:18:01.507518   14718 config.go:182] Loaded profile config "addons-164474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:18:01.507577   14718 addons.go:606] checking whether the cluster is paused
	I1018 17:18:01.507755   14718 config.go:182] Loaded profile config "addons-164474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:18:01.507809   14718 host.go:66] Checking if "addons-164474" exists ...
	I1018 17:18:01.508447   14718 cli_runner.go:164] Run: docker container inspect addons-164474 --format={{.State.Status}}
	I1018 17:18:01.547442   14718 ssh_runner.go:195] Run: systemctl --version
	I1018 17:18:01.547537   14718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164474
	I1018 17:18:01.578294   14718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/addons-164474/id_rsa Username:docker}
	I1018 17:18:01.683906   14718 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 17:18:01.683998   14718 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 17:18:01.729204   14718 cri.go:89] found id: "b08639acc4b919cd6a03e2f867138c8867c672b691deaa7e72836d00940fb241"
	I1018 17:18:01.729257   14718 cri.go:89] found id: "968c95a146a7fe08d5189eee29bd9582f2894b5d0f04e78e794058e86e194f17"
	I1018 17:18:01.729265   14718 cri.go:89] found id: "7657f768a8a9ac41bcd1f5e7a196579a7dcf31f08b605bba0bd11acb46369892"
	I1018 17:18:01.729269   14718 cri.go:89] found id: "a4777ba56bbe130ce2d0759f981f7a5a7a81a6f76b26c9602759d75786f28075"
	I1018 17:18:01.729273   14718 cri.go:89] found id: "cdf72845ca4f04b7f38a96e8e2bc2c5bff55db097097fe86438572754061e4d1"
	I1018 17:18:01.729277   14718 cri.go:89] found id: "cbf41849e12c028d15eee86acc3c0fcaf5d31af35d656b7935de4a45730fb182"
	I1018 17:18:01.729280   14718 cri.go:89] found id: "cd1c762de0b5dd26a00d004eb60c3a0356920d2d898bf210120e83239de379d3"
	I1018 17:18:01.729283   14718 cri.go:89] found id: "f97b941babec4dfdf104ffdbe7459e396a64a17a6edfa11989d9170c5b5365e2"
	I1018 17:18:01.729287   14718 cri.go:89] found id: "c763b99ed4a70e785446e888023cdfabc0fdeb6e7dcb1a84844d98d22b841291"
	I1018 17:18:01.729293   14718 cri.go:89] found id: "26297b4bb562054967554961013b4aecf4a819a64b9615266425ddb33797d349"
	I1018 17:18:01.729297   14718 cri.go:89] found id: "14f2f76f82dc964b8b157e088100913e80feaa2be642ecc8b72fea78bd2a0ed1"
	I1018 17:18:01.729300   14718 cri.go:89] found id: "901f9bb2898fac636a6903ad516f9b140591198721e4e2bfd30c9ab9155a01ed"
	I1018 17:18:01.729304   14718 cri.go:89] found id: "8a59f8ac6ef2822e7088c9cd1a68272c147739f96eaf27abf4a85d43c140b0ea"
	I1018 17:18:01.729308   14718 cri.go:89] found id: "ce684ca523f08f4af3d1134e239085b099cc9e2cd0f8679963ba4f111fcf7567"
	I1018 17:18:01.729312   14718 cri.go:89] found id: "f402fe3063f55e7003a2aaac453c55c6b2139f8fa75d1a062b447a4a5a8f278c"
	I1018 17:18:01.729317   14718 cri.go:89] found id: "6865806b912ab6d902d766fb60959288c01cc7c01f0f6d41ece13a1484e43f45"
	I1018 17:18:01.729325   14718 cri.go:89] found id: "8d07fa8a1c45fb2b7f3f20b332023c1b057391cd1e4435eb47db001464e9ada7"
	I1018 17:18:01.729330   14718 cri.go:89] found id: "ece6fd8e36b7414b9ea8a96fa9d85543498f89e17705fa3bc262b1570f482b24"
	I1018 17:18:01.729333   14718 cri.go:89] found id: "d12b84a60111629c5268442b96bd59e440c9aec3f86f326d9528b07daa476596"
	I1018 17:18:01.729336   14718 cri.go:89] found id: "d87115dc1b972147e18ebd00d21f7d791e5831c69fbef5f5e25fb2fade668bf7"
	I1018 17:18:01.729341   14718 cri.go:89] found id: "07f016f168b62771c5ab60ab8215041fcead58a20ef1da5932bcb8d6da58077f"
	I1018 17:18:01.729344   14718 cri.go:89] found id: "f085bccd65219cd8bb8d59ffcc8bee71589bead44d17e3e6fe5269fe6781f2f3"
	I1018 17:18:01.729348   14718 cri.go:89] found id: "4a1b92f8cd14a17c1e2790e1ca03a5608e43fb0ee84dba04aae2757215b8f043"
	I1018 17:18:01.729355   14718 cri.go:89] found id: "246aa3ddddf57033502d5fd5679ade1ae4e79cefdfdc7645841ea4f17a3e0313"
	I1018 17:18:01.729358   14718 cri.go:89] found id: ""
	I1018 17:18:01.729412   14718 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 17:18:01.748869   14718 out.go:203] 
	W1018 17:18:01.751962   14718 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T17:18:01Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T17:18:01Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 17:18:01.752005   14718 out.go:285] * 
	* 
	W1018 17:18:01.756306   14718 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 17:18:01.759278   14718 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-164474 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-164474 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-164474 addons disable ingress --alsologtostderr -v=1: exit status 11 (298.454564ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 17:18:01.823787   14764 out.go:360] Setting OutFile to fd 1 ...
	I1018 17:18:01.824000   14764 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 17:18:01.824033   14764 out.go:374] Setting ErrFile to fd 2...
	I1018 17:18:01.824053   14764 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 17:18:01.824323   14764 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-2509/.minikube/bin
	I1018 17:18:01.825431   14764 mustload.go:65] Loading cluster: addons-164474
	I1018 17:18:01.825884   14764 config.go:182] Loaded profile config "addons-164474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:18:01.825925   14764 addons.go:606] checking whether the cluster is paused
	I1018 17:18:01.826056   14764 config.go:182] Loaded profile config "addons-164474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:18:01.826097   14764 host.go:66] Checking if "addons-164474" exists ...
	I1018 17:18:01.826565   14764 cli_runner.go:164] Run: docker container inspect addons-164474 --format={{.State.Status}}
	I1018 17:18:01.844127   14764 ssh_runner.go:195] Run: systemctl --version
	I1018 17:18:01.844178   14764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164474
	I1018 17:18:01.865806   14764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/addons-164474/id_rsa Username:docker}
	I1018 17:18:01.971936   14764 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 17:18:01.972064   14764 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 17:18:02.032021   14764 cri.go:89] found id: "b08639acc4b919cd6a03e2f867138c8867c672b691deaa7e72836d00940fb241"
	I1018 17:18:02.032045   14764 cri.go:89] found id: "968c95a146a7fe08d5189eee29bd9582f2894b5d0f04e78e794058e86e194f17"
	I1018 17:18:02.032050   14764 cri.go:89] found id: "7657f768a8a9ac41bcd1f5e7a196579a7dcf31f08b605bba0bd11acb46369892"
	I1018 17:18:02.032054   14764 cri.go:89] found id: "a4777ba56bbe130ce2d0759f981f7a5a7a81a6f76b26c9602759d75786f28075"
	I1018 17:18:02.032057   14764 cri.go:89] found id: "cdf72845ca4f04b7f38a96e8e2bc2c5bff55db097097fe86438572754061e4d1"
	I1018 17:18:02.032061   14764 cri.go:89] found id: "cbf41849e12c028d15eee86acc3c0fcaf5d31af35d656b7935de4a45730fb182"
	I1018 17:18:02.032064   14764 cri.go:89] found id: "cd1c762de0b5dd26a00d004eb60c3a0356920d2d898bf210120e83239de379d3"
	I1018 17:18:02.032067   14764 cri.go:89] found id: "f97b941babec4dfdf104ffdbe7459e396a64a17a6edfa11989d9170c5b5365e2"
	I1018 17:18:02.032096   14764 cri.go:89] found id: "c763b99ed4a70e785446e888023cdfabc0fdeb6e7dcb1a84844d98d22b841291"
	I1018 17:18:02.032109   14764 cri.go:89] found id: "26297b4bb562054967554961013b4aecf4a819a64b9615266425ddb33797d349"
	I1018 17:18:02.032113   14764 cri.go:89] found id: "14f2f76f82dc964b8b157e088100913e80feaa2be642ecc8b72fea78bd2a0ed1"
	I1018 17:18:02.032116   14764 cri.go:89] found id: "901f9bb2898fac636a6903ad516f9b140591198721e4e2bfd30c9ab9155a01ed"
	I1018 17:18:02.032120   14764 cri.go:89] found id: "8a59f8ac6ef2822e7088c9cd1a68272c147739f96eaf27abf4a85d43c140b0ea"
	I1018 17:18:02.032123   14764 cri.go:89] found id: "ce684ca523f08f4af3d1134e239085b099cc9e2cd0f8679963ba4f111fcf7567"
	I1018 17:18:02.032127   14764 cri.go:89] found id: "f402fe3063f55e7003a2aaac453c55c6b2139f8fa75d1a062b447a4a5a8f278c"
	I1018 17:18:02.032132   14764 cri.go:89] found id: "6865806b912ab6d902d766fb60959288c01cc7c01f0f6d41ece13a1484e43f45"
	I1018 17:18:02.032142   14764 cri.go:89] found id: "8d07fa8a1c45fb2b7f3f20b332023c1b057391cd1e4435eb47db001464e9ada7"
	I1018 17:18:02.032145   14764 cri.go:89] found id: "ece6fd8e36b7414b9ea8a96fa9d85543498f89e17705fa3bc262b1570f482b24"
	I1018 17:18:02.032149   14764 cri.go:89] found id: "d12b84a60111629c5268442b96bd59e440c9aec3f86f326d9528b07daa476596"
	I1018 17:18:02.032152   14764 cri.go:89] found id: "d87115dc1b972147e18ebd00d21f7d791e5831c69fbef5f5e25fb2fade668bf7"
	I1018 17:18:02.032173   14764 cri.go:89] found id: "07f016f168b62771c5ab60ab8215041fcead58a20ef1da5932bcb8d6da58077f"
	I1018 17:18:02.032184   14764 cri.go:89] found id: "f085bccd65219cd8bb8d59ffcc8bee71589bead44d17e3e6fe5269fe6781f2f3"
	I1018 17:18:02.032188   14764 cri.go:89] found id: "4a1b92f8cd14a17c1e2790e1ca03a5608e43fb0ee84dba04aae2757215b8f043"
	I1018 17:18:02.032191   14764 cri.go:89] found id: "246aa3ddddf57033502d5fd5679ade1ae4e79cefdfdc7645841ea4f17a3e0313"
	I1018 17:18:02.032195   14764 cri.go:89] found id: ""
	I1018 17:18:02.032266   14764 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 17:18:02.047939   14764 out.go:203] 
	W1018 17:18:02.051017   14764 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T17:18:02Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T17:18:02Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 17:18:02.051048   14764 out.go:285] * 
	* 
	W1018 17:18:02.055356   14764 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 17:18:02.058343   14764 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-164474 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (145.63s)

                                                
                                    
TestAddons/parallel/InspektorGadget (6.27s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-sh2jw" [e16c3566-c7b6-4eef-b3fc-853747597429] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003809546s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-164474 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-164474 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (263.159738ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 17:16:25.634862   13565 out.go:360] Setting OutFile to fd 1 ...
	I1018 17:16:25.635129   13565 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 17:16:25.635161   13565 out.go:374] Setting ErrFile to fd 2...
	I1018 17:16:25.635181   13565 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 17:16:25.635495   13565 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-2509/.minikube/bin
	I1018 17:16:25.635853   13565 mustload.go:65] Loading cluster: addons-164474
	I1018 17:16:25.636383   13565 config.go:182] Loaded profile config "addons-164474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:16:25.636425   13565 addons.go:606] checking whether the cluster is paused
	I1018 17:16:25.636580   13565 config.go:182] Loaded profile config "addons-164474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:16:25.636618   13565 host.go:66] Checking if "addons-164474" exists ...
	I1018 17:16:25.637203   13565 cli_runner.go:164] Run: docker container inspect addons-164474 --format={{.State.Status}}
	I1018 17:16:25.654283   13565 ssh_runner.go:195] Run: systemctl --version
	I1018 17:16:25.654345   13565 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164474
	I1018 17:16:25.671487   13565 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/addons-164474/id_rsa Username:docker}
	I1018 17:16:25.779270   13565 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 17:16:25.779347   13565 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 17:16:25.809147   13565 cri.go:89] found id: "968c95a146a7fe08d5189eee29bd9582f2894b5d0f04e78e794058e86e194f17"
	I1018 17:16:25.809225   13565 cri.go:89] found id: "7657f768a8a9ac41bcd1f5e7a196579a7dcf31f08b605bba0bd11acb46369892"
	I1018 17:16:25.809244   13565 cri.go:89] found id: "a4777ba56bbe130ce2d0759f981f7a5a7a81a6f76b26c9602759d75786f28075"
	I1018 17:16:25.809264   13565 cri.go:89] found id: "cdf72845ca4f04b7f38a96e8e2bc2c5bff55db097097fe86438572754061e4d1"
	I1018 17:16:25.809299   13565 cri.go:89] found id: "cbf41849e12c028d15eee86acc3c0fcaf5d31af35d656b7935de4a45730fb182"
	I1018 17:16:25.809322   13565 cri.go:89] found id: "cd1c762de0b5dd26a00d004eb60c3a0356920d2d898bf210120e83239de379d3"
	I1018 17:16:25.809342   13565 cri.go:89] found id: "f97b941babec4dfdf104ffdbe7459e396a64a17a6edfa11989d9170c5b5365e2"
	I1018 17:16:25.809360   13565 cri.go:89] found id: "c763b99ed4a70e785446e888023cdfabc0fdeb6e7dcb1a84844d98d22b841291"
	I1018 17:16:25.809389   13565 cri.go:89] found id: "26297b4bb562054967554961013b4aecf4a819a64b9615266425ddb33797d349"
	I1018 17:16:25.809412   13565 cri.go:89] found id: "14f2f76f82dc964b8b157e088100913e80feaa2be642ecc8b72fea78bd2a0ed1"
	I1018 17:16:25.809439   13565 cri.go:89] found id: "901f9bb2898fac636a6903ad516f9b140591198721e4e2bfd30c9ab9155a01ed"
	I1018 17:16:25.809471   13565 cri.go:89] found id: "8a59f8ac6ef2822e7088c9cd1a68272c147739f96eaf27abf4a85d43c140b0ea"
	I1018 17:16:25.809494   13565 cri.go:89] found id: "ce684ca523f08f4af3d1134e239085b099cc9e2cd0f8679963ba4f111fcf7567"
	I1018 17:16:25.809512   13565 cri.go:89] found id: "f402fe3063f55e7003a2aaac453c55c6b2139f8fa75d1a062b447a4a5a8f278c"
	I1018 17:16:25.809531   13565 cri.go:89] found id: "6865806b912ab6d902d766fb60959288c01cc7c01f0f6d41ece13a1484e43f45"
	I1018 17:16:25.809560   13565 cri.go:89] found id: "8d07fa8a1c45fb2b7f3f20b332023c1b057391cd1e4435eb47db001464e9ada7"
	I1018 17:16:25.809590   13565 cri.go:89] found id: "ece6fd8e36b7414b9ea8a96fa9d85543498f89e17705fa3bc262b1570f482b24"
	I1018 17:16:25.809611   13565 cri.go:89] found id: "d12b84a60111629c5268442b96bd59e440c9aec3f86f326d9528b07daa476596"
	I1018 17:16:25.809643   13565 cri.go:89] found id: "d87115dc1b972147e18ebd00d21f7d791e5831c69fbef5f5e25fb2fade668bf7"
	I1018 17:16:25.809664   13565 cri.go:89] found id: "07f016f168b62771c5ab60ab8215041fcead58a20ef1da5932bcb8d6da58077f"
	I1018 17:16:25.809684   13565 cri.go:89] found id: "f085bccd65219cd8bb8d59ffcc8bee71589bead44d17e3e6fe5269fe6781f2f3"
	I1018 17:16:25.809700   13565 cri.go:89] found id: "4a1b92f8cd14a17c1e2790e1ca03a5608e43fb0ee84dba04aae2757215b8f043"
	I1018 17:16:25.809732   13565 cri.go:89] found id: "246aa3ddddf57033502d5fd5679ade1ae4e79cefdfdc7645841ea4f17a3e0313"
	I1018 17:16:25.809753   13565 cri.go:89] found id: ""
	I1018 17:16:25.809839   13565 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 17:16:25.823905   13565 out.go:203] 
	W1018 17:16:25.826717   13565 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T17:16:25Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T17:16:25Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 17:16:25.826740   13565 out.go:285] * 
	* 
	W1018 17:16:25.831110   13565 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 17:16:25.833990   13565 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-164474 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (6.27s)
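
Every addon enable/disable failure in this report follows the same pattern visible in the stderr above: before touching the addon, minikube checks whether the cluster is paused by listing kube-system containers through the CRI and then asking runc for its container list. On this crio/arm64 node `sudo runc list -f json` fails because the runc state directory /run/runc does not exist (likely because the node's default OCI runtime does not keep its state there), so the command exits with status 11 before the addon itself is touched. Below is a minimal, hypothetical sketch of reproducing that check by hand, assuming it is run inside the node (e.g. after `minikube ssh -p addons-164474`); it mirrors the two commands shown in the log and is not minikube's actual implementation.

    // Hypothetical reproduction of the paused-state check seen in the log above.
    // Assumes it runs inside the minikube node (e.g. via `minikube ssh -p addons-164474`).
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Step 1 from the log: list kube-system containers via the CRI.
        ids, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            panic(err)
        }
        fmt.Printf("crictl returned %d bytes of container IDs\n", len(ids))

        // Step 2, the call that fails in the report: runc looks for its state
        // under /run/runc by default, which does not exist on this node.
        out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
        if err != nil {
            fmt.Printf("runc list failed: %v\n%s", err, out)
            return
        }
        fmt.Printf("runc list: %s\n", out)
    }

runc also accepts a global --root flag pointing at an alternative state directory, which is one way to confirm where the runtime on the node actually keeps its state.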

                                                
                                    

TestAddons/parallel/MetricsServer (6.36s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 6.27853ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-8dnml" [7cf655d5-48cd-488d-9cbd-b19f09925a22] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003456633s
addons_test.go:463: (dbg) Run:  kubectl --context addons-164474 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-164474 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-164474 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (259.976828ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 17:15:36.222988   12461 out.go:360] Setting OutFile to fd 1 ...
	I1018 17:15:36.223299   12461 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 17:15:36.223332   12461 out.go:374] Setting ErrFile to fd 2...
	I1018 17:15:36.223351   12461 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 17:15:36.223634   12461 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-2509/.minikube/bin
	I1018 17:15:36.223972   12461 mustload.go:65] Loading cluster: addons-164474
	I1018 17:15:36.224393   12461 config.go:182] Loaded profile config "addons-164474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:15:36.224431   12461 addons.go:606] checking whether the cluster is paused
	I1018 17:15:36.224577   12461 config.go:182] Loaded profile config "addons-164474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:15:36.224614   12461 host.go:66] Checking if "addons-164474" exists ...
	I1018 17:15:36.225170   12461 cli_runner.go:164] Run: docker container inspect addons-164474 --format={{.State.Status}}
	I1018 17:15:36.243998   12461 ssh_runner.go:195] Run: systemctl --version
	I1018 17:15:36.244056   12461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164474
	I1018 17:15:36.262334   12461 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/addons-164474/id_rsa Username:docker}
	I1018 17:15:36.364365   12461 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 17:15:36.364453   12461 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 17:15:36.400521   12461 cri.go:89] found id: "968c95a146a7fe08d5189eee29bd9582f2894b5d0f04e78e794058e86e194f17"
	I1018 17:15:36.400542   12461 cri.go:89] found id: "7657f768a8a9ac41bcd1f5e7a196579a7dcf31f08b605bba0bd11acb46369892"
	I1018 17:15:36.400546   12461 cri.go:89] found id: "a4777ba56bbe130ce2d0759f981f7a5a7a81a6f76b26c9602759d75786f28075"
	I1018 17:15:36.400551   12461 cri.go:89] found id: "cdf72845ca4f04b7f38a96e8e2bc2c5bff55db097097fe86438572754061e4d1"
	I1018 17:15:36.400558   12461 cri.go:89] found id: "cbf41849e12c028d15eee86acc3c0fcaf5d31af35d656b7935de4a45730fb182"
	I1018 17:15:36.400562   12461 cri.go:89] found id: "cd1c762de0b5dd26a00d004eb60c3a0356920d2d898bf210120e83239de379d3"
	I1018 17:15:36.400565   12461 cri.go:89] found id: "f97b941babec4dfdf104ffdbe7459e396a64a17a6edfa11989d9170c5b5365e2"
	I1018 17:15:36.400570   12461 cri.go:89] found id: "c763b99ed4a70e785446e888023cdfabc0fdeb6e7dcb1a84844d98d22b841291"
	I1018 17:15:36.400573   12461 cri.go:89] found id: "26297b4bb562054967554961013b4aecf4a819a64b9615266425ddb33797d349"
	I1018 17:15:36.400582   12461 cri.go:89] found id: "14f2f76f82dc964b8b157e088100913e80feaa2be642ecc8b72fea78bd2a0ed1"
	I1018 17:15:36.400585   12461 cri.go:89] found id: "901f9bb2898fac636a6903ad516f9b140591198721e4e2bfd30c9ab9155a01ed"
	I1018 17:15:36.400589   12461 cri.go:89] found id: "8a59f8ac6ef2822e7088c9cd1a68272c147739f96eaf27abf4a85d43c140b0ea"
	I1018 17:15:36.400593   12461 cri.go:89] found id: "ce684ca523f08f4af3d1134e239085b099cc9e2cd0f8679963ba4f111fcf7567"
	I1018 17:15:36.400596   12461 cri.go:89] found id: "f402fe3063f55e7003a2aaac453c55c6b2139f8fa75d1a062b447a4a5a8f278c"
	I1018 17:15:36.400599   12461 cri.go:89] found id: "6865806b912ab6d902d766fb60959288c01cc7c01f0f6d41ece13a1484e43f45"
	I1018 17:15:36.400604   12461 cri.go:89] found id: "8d07fa8a1c45fb2b7f3f20b332023c1b057391cd1e4435eb47db001464e9ada7"
	I1018 17:15:36.400610   12461 cri.go:89] found id: "ece6fd8e36b7414b9ea8a96fa9d85543498f89e17705fa3bc262b1570f482b24"
	I1018 17:15:36.400613   12461 cri.go:89] found id: "d12b84a60111629c5268442b96bd59e440c9aec3f86f326d9528b07daa476596"
	I1018 17:15:36.400616   12461 cri.go:89] found id: "d87115dc1b972147e18ebd00d21f7d791e5831c69fbef5f5e25fb2fade668bf7"
	I1018 17:15:36.400619   12461 cri.go:89] found id: "07f016f168b62771c5ab60ab8215041fcead58a20ef1da5932bcb8d6da58077f"
	I1018 17:15:36.400624   12461 cri.go:89] found id: "f085bccd65219cd8bb8d59ffcc8bee71589bead44d17e3e6fe5269fe6781f2f3"
	I1018 17:15:36.400627   12461 cri.go:89] found id: "4a1b92f8cd14a17c1e2790e1ca03a5608e43fb0ee84dba04aae2757215b8f043"
	I1018 17:15:36.400631   12461 cri.go:89] found id: "246aa3ddddf57033502d5fd5679ade1ae4e79cefdfdc7645841ea4f17a3e0313"
	I1018 17:15:36.400634   12461 cri.go:89] found id: ""
	I1018 17:15:36.400683   12461 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 17:15:36.416657   12461 out.go:203] 
	W1018 17:15:36.419478   12461 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T17:15:36Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T17:15:36Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 17:15:36.419503   12461 out.go:285] * 
	* 
	W1018 17:15:36.423823   12461 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 17:15:36.426594   12461 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-164474 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (6.36s)
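
The functional half of this test passed: the metrics-server pod became Ready and `kubectl top pods -n kube-system` returned data; only the subsequent addon-disable hit the /run/runc check described under InspektorGadget. For reference, a hedged client-go sketch of reading the same metrics.k8s.io data that `kubectl top pods` consumes; the kubeconfig source and namespace are assumptions, not taken from the test code.

    package main

    import (
        "context"
        "fmt"
        "os"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/tools/clientcmd"
        metricsclient "k8s.io/metrics/pkg/client/clientset/versioned"
    )

    func main() {
        // Assumes KUBECONFIG points at the addons-164474 cluster.
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            panic(err)
        }
        mc, err := metricsclient.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // The metrics.k8s.io API served by the metrics-server addon.
        podMetrics, err := mc.MetricsV1beta1().PodMetricses("kube-system").List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, pm := range podMetrics.Items {
            for _, c := range pm.Containers {
                fmt.Printf("%s/%s cpu=%s mem=%s\n", pm.Name, c.Name, c.Usage.Cpu(), c.Usage.Memory())
            }
        }
    }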

                                                
                                    
TestAddons/parallel/CSI (46.89s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1018 17:15:32.683052    4320 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1018 17:15:32.685893    4320 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1018 17:15:32.685925    4320 kapi.go:107] duration metric: took 4.803152ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 4.81382ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-164474 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-164474 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-164474 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-164474 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-164474 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-164474 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-164474 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-164474 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-164474 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-164474 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-164474 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-164474 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-164474 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-164474 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-164474 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-164474 get pvc hpvc -o jsonpath={.status.phase} -n default
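
The run of helpers_test.go:402 lines above is the test's poll loop: it re-reads the claim's .status.phase until the hpvc PVC reports Bound, within the 6m timeout. A minimal client-go sketch of the same wait, assuming KUBECONFIG points at this cluster; the object names mirror the test fixtures, and this is not minikube's helper code.

    package main

    import (
        "context"
        "fmt"
        "os"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Poll the claim's phase until Bound, the same condition the repeated
        // `kubectl get pvc hpvc -o jsonpath={.status.phase}` calls check.
        deadline := time.Now().Add(6 * time.Minute)
        for time.Now().Before(deadline) {
            pvc, err := cs.CoreV1().PersistentVolumeClaims("default").Get(context.Background(), "hpvc", metav1.GetOptions{})
            if err == nil && pvc.Status.Phase == corev1.ClaimBound {
                fmt.Println("hpvc is Bound")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for hpvc to bind")
    }
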
addons_test.go:562: (dbg) Run:  kubectl --context addons-164474 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [c5680947-6833-4b93-b1cf-85b17cfcf986] Pending
helpers_test.go:352: "task-pv-pod" [c5680947-6833-4b93-b1cf-85b17cfcf986] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [c5680947-6833-4b93-b1cf-85b17cfcf986] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.003672169s
addons_test.go:572: (dbg) Run:  kubectl --context addons-164474 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-164474 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-164474 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-164474 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-164474 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-164474 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-164474 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-164474 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-164474 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-164474 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-164474 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-164474 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-164474 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-164474 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-164474 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-164474 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [b9f81bd7-0754-4ba1-a332-77af31585c97] Pending
helpers_test.go:352: "task-pv-pod-restore" [b9f81bd7-0754-4ba1-a332-77af31585c97] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [b9f81bd7-0754-4ba1-a332-77af31585c97] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.00341467s
addons_test.go:614: (dbg) Run:  kubectl --context addons-164474 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-164474 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-164474 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-164474 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-164474 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (249.193264ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 17:16:19.099225   13463 out.go:360] Setting OutFile to fd 1 ...
	I1018 17:16:19.099460   13463 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 17:16:19.099473   13463 out.go:374] Setting ErrFile to fd 2...
	I1018 17:16:19.099478   13463 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 17:16:19.099734   13463 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-2509/.minikube/bin
	I1018 17:16:19.100003   13463 mustload.go:65] Loading cluster: addons-164474
	I1018 17:16:19.100359   13463 config.go:182] Loaded profile config "addons-164474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:16:19.100376   13463 addons.go:606] checking whether the cluster is paused
	I1018 17:16:19.100487   13463 config.go:182] Loaded profile config "addons-164474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:16:19.100505   13463 host.go:66] Checking if "addons-164474" exists ...
	I1018 17:16:19.101028   13463 cli_runner.go:164] Run: docker container inspect addons-164474 --format={{.State.Status}}
	I1018 17:16:19.118741   13463 ssh_runner.go:195] Run: systemctl --version
	I1018 17:16:19.118796   13463 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164474
	I1018 17:16:19.137885   13463 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/addons-164474/id_rsa Username:docker}
	I1018 17:16:19.239460   13463 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 17:16:19.239554   13463 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 17:16:19.268388   13463 cri.go:89] found id: "968c95a146a7fe08d5189eee29bd9582f2894b5d0f04e78e794058e86e194f17"
	I1018 17:16:19.268416   13463 cri.go:89] found id: "7657f768a8a9ac41bcd1f5e7a196579a7dcf31f08b605bba0bd11acb46369892"
	I1018 17:16:19.268421   13463 cri.go:89] found id: "a4777ba56bbe130ce2d0759f981f7a5a7a81a6f76b26c9602759d75786f28075"
	I1018 17:16:19.268425   13463 cri.go:89] found id: "cdf72845ca4f04b7f38a96e8e2bc2c5bff55db097097fe86438572754061e4d1"
	I1018 17:16:19.268428   13463 cri.go:89] found id: "cbf41849e12c028d15eee86acc3c0fcaf5d31af35d656b7935de4a45730fb182"
	I1018 17:16:19.268432   13463 cri.go:89] found id: "cd1c762de0b5dd26a00d004eb60c3a0356920d2d898bf210120e83239de379d3"
	I1018 17:16:19.268435   13463 cri.go:89] found id: "f97b941babec4dfdf104ffdbe7459e396a64a17a6edfa11989d9170c5b5365e2"
	I1018 17:16:19.268438   13463 cri.go:89] found id: "c763b99ed4a70e785446e888023cdfabc0fdeb6e7dcb1a84844d98d22b841291"
	I1018 17:16:19.268441   13463 cri.go:89] found id: "26297b4bb562054967554961013b4aecf4a819a64b9615266425ddb33797d349"
	I1018 17:16:19.268450   13463 cri.go:89] found id: "14f2f76f82dc964b8b157e088100913e80feaa2be642ecc8b72fea78bd2a0ed1"
	I1018 17:16:19.268456   13463 cri.go:89] found id: "901f9bb2898fac636a6903ad516f9b140591198721e4e2bfd30c9ab9155a01ed"
	I1018 17:16:19.268459   13463 cri.go:89] found id: "8a59f8ac6ef2822e7088c9cd1a68272c147739f96eaf27abf4a85d43c140b0ea"
	I1018 17:16:19.268463   13463 cri.go:89] found id: "ce684ca523f08f4af3d1134e239085b099cc9e2cd0f8679963ba4f111fcf7567"
	I1018 17:16:19.268466   13463 cri.go:89] found id: "f402fe3063f55e7003a2aaac453c55c6b2139f8fa75d1a062b447a4a5a8f278c"
	I1018 17:16:19.268469   13463 cri.go:89] found id: "6865806b912ab6d902d766fb60959288c01cc7c01f0f6d41ece13a1484e43f45"
	I1018 17:16:19.268474   13463 cri.go:89] found id: "8d07fa8a1c45fb2b7f3f20b332023c1b057391cd1e4435eb47db001464e9ada7"
	I1018 17:16:19.268484   13463 cri.go:89] found id: "ece6fd8e36b7414b9ea8a96fa9d85543498f89e17705fa3bc262b1570f482b24"
	I1018 17:16:19.268488   13463 cri.go:89] found id: "d12b84a60111629c5268442b96bd59e440c9aec3f86f326d9528b07daa476596"
	I1018 17:16:19.268491   13463 cri.go:89] found id: "d87115dc1b972147e18ebd00d21f7d791e5831c69fbef5f5e25fb2fade668bf7"
	I1018 17:16:19.268494   13463 cri.go:89] found id: "07f016f168b62771c5ab60ab8215041fcead58a20ef1da5932bcb8d6da58077f"
	I1018 17:16:19.268499   13463 cri.go:89] found id: "f085bccd65219cd8bb8d59ffcc8bee71589bead44d17e3e6fe5269fe6781f2f3"
	I1018 17:16:19.268502   13463 cri.go:89] found id: "4a1b92f8cd14a17c1e2790e1ca03a5608e43fb0ee84dba04aae2757215b8f043"
	I1018 17:16:19.268505   13463 cri.go:89] found id: "246aa3ddddf57033502d5fd5679ade1ae4e79cefdfdc7645841ea4f17a3e0313"
	I1018 17:16:19.268508   13463 cri.go:89] found id: ""
	I1018 17:16:19.268560   13463 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 17:16:19.283890   13463 out.go:203] 
	W1018 17:16:19.286867   13463 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T17:16:19Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T17:16:19Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 17:16:19.286896   13463 out.go:285] * 
	* 
	W1018 17:16:19.291245   13463 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 17:16:19.294173   13463 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-164474 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-164474 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-164474 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (270.627355ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 17:16:19.364922   13507 out.go:360] Setting OutFile to fd 1 ...
	I1018 17:16:19.365219   13507 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 17:16:19.365234   13507 out.go:374] Setting ErrFile to fd 2...
	I1018 17:16:19.365240   13507 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 17:16:19.365598   13507 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-2509/.minikube/bin
	I1018 17:16:19.365970   13507 mustload.go:65] Loading cluster: addons-164474
	I1018 17:16:19.366476   13507 config.go:182] Loaded profile config "addons-164474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:16:19.366498   13507 addons.go:606] checking whether the cluster is paused
	I1018 17:16:19.366659   13507 config.go:182] Loaded profile config "addons-164474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:16:19.366695   13507 host.go:66] Checking if "addons-164474" exists ...
	I1018 17:16:19.367279   13507 cli_runner.go:164] Run: docker container inspect addons-164474 --format={{.State.Status}}
	I1018 17:16:19.389350   13507 ssh_runner.go:195] Run: systemctl --version
	I1018 17:16:19.389414   13507 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164474
	I1018 17:16:19.406517   13507 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/addons-164474/id_rsa Username:docker}
	I1018 17:16:19.511686   13507 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 17:16:19.511772   13507 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 17:16:19.540066   13507 cri.go:89] found id: "968c95a146a7fe08d5189eee29bd9582f2894b5d0f04e78e794058e86e194f17"
	I1018 17:16:19.540094   13507 cri.go:89] found id: "7657f768a8a9ac41bcd1f5e7a196579a7dcf31f08b605bba0bd11acb46369892"
	I1018 17:16:19.540099   13507 cri.go:89] found id: "a4777ba56bbe130ce2d0759f981f7a5a7a81a6f76b26c9602759d75786f28075"
	I1018 17:16:19.540102   13507 cri.go:89] found id: "cdf72845ca4f04b7f38a96e8e2bc2c5bff55db097097fe86438572754061e4d1"
	I1018 17:16:19.540106   13507 cri.go:89] found id: "cbf41849e12c028d15eee86acc3c0fcaf5d31af35d656b7935de4a45730fb182"
	I1018 17:16:19.540109   13507 cri.go:89] found id: "cd1c762de0b5dd26a00d004eb60c3a0356920d2d898bf210120e83239de379d3"
	I1018 17:16:19.540112   13507 cri.go:89] found id: "f97b941babec4dfdf104ffdbe7459e396a64a17a6edfa11989d9170c5b5365e2"
	I1018 17:16:19.540116   13507 cri.go:89] found id: "c763b99ed4a70e785446e888023cdfabc0fdeb6e7dcb1a84844d98d22b841291"
	I1018 17:16:19.540119   13507 cri.go:89] found id: "26297b4bb562054967554961013b4aecf4a819a64b9615266425ddb33797d349"
	I1018 17:16:19.540126   13507 cri.go:89] found id: "14f2f76f82dc964b8b157e088100913e80feaa2be642ecc8b72fea78bd2a0ed1"
	I1018 17:16:19.540130   13507 cri.go:89] found id: "901f9bb2898fac636a6903ad516f9b140591198721e4e2bfd30c9ab9155a01ed"
	I1018 17:16:19.540133   13507 cri.go:89] found id: "8a59f8ac6ef2822e7088c9cd1a68272c147739f96eaf27abf4a85d43c140b0ea"
	I1018 17:16:19.540137   13507 cri.go:89] found id: "ce684ca523f08f4af3d1134e239085b099cc9e2cd0f8679963ba4f111fcf7567"
	I1018 17:16:19.540140   13507 cri.go:89] found id: "f402fe3063f55e7003a2aaac453c55c6b2139f8fa75d1a062b447a4a5a8f278c"
	I1018 17:16:19.540143   13507 cri.go:89] found id: "6865806b912ab6d902d766fb60959288c01cc7c01f0f6d41ece13a1484e43f45"
	I1018 17:16:19.540148   13507 cri.go:89] found id: "8d07fa8a1c45fb2b7f3f20b332023c1b057391cd1e4435eb47db001464e9ada7"
	I1018 17:16:19.540155   13507 cri.go:89] found id: "ece6fd8e36b7414b9ea8a96fa9d85543498f89e17705fa3bc262b1570f482b24"
	I1018 17:16:19.540159   13507 cri.go:89] found id: "d12b84a60111629c5268442b96bd59e440c9aec3f86f326d9528b07daa476596"
	I1018 17:16:19.540162   13507 cri.go:89] found id: "d87115dc1b972147e18ebd00d21f7d791e5831c69fbef5f5e25fb2fade668bf7"
	I1018 17:16:19.540165   13507 cri.go:89] found id: "07f016f168b62771c5ab60ab8215041fcead58a20ef1da5932bcb8d6da58077f"
	I1018 17:16:19.540169   13507 cri.go:89] found id: "f085bccd65219cd8bb8d59ffcc8bee71589bead44d17e3e6fe5269fe6781f2f3"
	I1018 17:16:19.540172   13507 cri.go:89] found id: "4a1b92f8cd14a17c1e2790e1ca03a5608e43fb0ee84dba04aae2757215b8f043"
	I1018 17:16:19.540175   13507 cri.go:89] found id: "246aa3ddddf57033502d5fd5679ade1ae4e79cefdfdc7645841ea4f17a3e0313"
	I1018 17:16:19.540178   13507 cri.go:89] found id: ""
	I1018 17:16:19.540229   13507 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 17:16:19.555903   13507 out.go:203] 
	W1018 17:16:19.558724   13507 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T17:16:19Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T17:16:19Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 17:16:19.558746   13507 out.go:285] * 
	* 
	W1018 17:16:19.563073   13507 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 17:16:19.565984   13507 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-164474 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (46.89s)

                                                
                                    
TestAddons/parallel/Headlamp (3.16s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-164474 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-164474 --alsologtostderr -v=1: exit status 11 (262.077244ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 17:15:09.475641   11251 out.go:360] Setting OutFile to fd 1 ...
	I1018 17:15:09.475901   11251 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 17:15:09.476404   11251 out.go:374] Setting ErrFile to fd 2...
	I1018 17:15:09.476427   11251 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 17:15:09.476699   11251 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-2509/.minikube/bin
	I1018 17:15:09.477102   11251 mustload.go:65] Loading cluster: addons-164474
	I1018 17:15:09.477641   11251 config.go:182] Loaded profile config "addons-164474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:15:09.477698   11251 addons.go:606] checking whether the cluster is paused
	I1018 17:15:09.477854   11251 config.go:182] Loaded profile config "addons-164474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:15:09.477894   11251 host.go:66] Checking if "addons-164474" exists ...
	I1018 17:15:09.478539   11251 cli_runner.go:164] Run: docker container inspect addons-164474 --format={{.State.Status}}
	I1018 17:15:09.496361   11251 ssh_runner.go:195] Run: systemctl --version
	I1018 17:15:09.496410   11251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164474
	I1018 17:15:09.520474   11251 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/addons-164474/id_rsa Username:docker}
	I1018 17:15:09.623425   11251 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 17:15:09.623515   11251 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 17:15:09.652002   11251 cri.go:89] found id: "968c95a146a7fe08d5189eee29bd9582f2894b5d0f04e78e794058e86e194f17"
	I1018 17:15:09.652024   11251 cri.go:89] found id: "7657f768a8a9ac41bcd1f5e7a196579a7dcf31f08b605bba0bd11acb46369892"
	I1018 17:15:09.652029   11251 cri.go:89] found id: "a4777ba56bbe130ce2d0759f981f7a5a7a81a6f76b26c9602759d75786f28075"
	I1018 17:15:09.652033   11251 cri.go:89] found id: "cdf72845ca4f04b7f38a96e8e2bc2c5bff55db097097fe86438572754061e4d1"
	I1018 17:15:09.652038   11251 cri.go:89] found id: "cbf41849e12c028d15eee86acc3c0fcaf5d31af35d656b7935de4a45730fb182"
	I1018 17:15:09.652042   11251 cri.go:89] found id: "cd1c762de0b5dd26a00d004eb60c3a0356920d2d898bf210120e83239de379d3"
	I1018 17:15:09.652046   11251 cri.go:89] found id: "f97b941babec4dfdf104ffdbe7459e396a64a17a6edfa11989d9170c5b5365e2"
	I1018 17:15:09.652050   11251 cri.go:89] found id: "c763b99ed4a70e785446e888023cdfabc0fdeb6e7dcb1a84844d98d22b841291"
	I1018 17:15:09.652053   11251 cri.go:89] found id: "26297b4bb562054967554961013b4aecf4a819a64b9615266425ddb33797d349"
	I1018 17:15:09.652060   11251 cri.go:89] found id: "14f2f76f82dc964b8b157e088100913e80feaa2be642ecc8b72fea78bd2a0ed1"
	I1018 17:15:09.652064   11251 cri.go:89] found id: "901f9bb2898fac636a6903ad516f9b140591198721e4e2bfd30c9ab9155a01ed"
	I1018 17:15:09.652068   11251 cri.go:89] found id: "8a59f8ac6ef2822e7088c9cd1a68272c147739f96eaf27abf4a85d43c140b0ea"
	I1018 17:15:09.652072   11251 cri.go:89] found id: "ce684ca523f08f4af3d1134e239085b099cc9e2cd0f8679963ba4f111fcf7567"
	I1018 17:15:09.652076   11251 cri.go:89] found id: "f402fe3063f55e7003a2aaac453c55c6b2139f8fa75d1a062b447a4a5a8f278c"
	I1018 17:15:09.652079   11251 cri.go:89] found id: "6865806b912ab6d902d766fb60959288c01cc7c01f0f6d41ece13a1484e43f45"
	I1018 17:15:09.652085   11251 cri.go:89] found id: "8d07fa8a1c45fb2b7f3f20b332023c1b057391cd1e4435eb47db001464e9ada7"
	I1018 17:15:09.652091   11251 cri.go:89] found id: "ece6fd8e36b7414b9ea8a96fa9d85543498f89e17705fa3bc262b1570f482b24"
	I1018 17:15:09.652096   11251 cri.go:89] found id: "d12b84a60111629c5268442b96bd59e440c9aec3f86f326d9528b07daa476596"
	I1018 17:15:09.652100   11251 cri.go:89] found id: "d87115dc1b972147e18ebd00d21f7d791e5831c69fbef5f5e25fb2fade668bf7"
	I1018 17:15:09.652104   11251 cri.go:89] found id: "07f016f168b62771c5ab60ab8215041fcead58a20ef1da5932bcb8d6da58077f"
	I1018 17:15:09.652108   11251 cri.go:89] found id: "f085bccd65219cd8bb8d59ffcc8bee71589bead44d17e3e6fe5269fe6781f2f3"
	I1018 17:15:09.652112   11251 cri.go:89] found id: "4a1b92f8cd14a17c1e2790e1ca03a5608e43fb0ee84dba04aae2757215b8f043"
	I1018 17:15:09.652115   11251 cri.go:89] found id: "246aa3ddddf57033502d5fd5679ade1ae4e79cefdfdc7645841ea4f17a3e0313"
	I1018 17:15:09.652118   11251 cri.go:89] found id: ""
	I1018 17:15:09.652178   11251 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 17:15:09.667246   11251 out.go:203] 
	W1018 17:15:09.670009   11251 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T17:15:09Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T17:15:09Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 17:15:09.670052   11251 out.go:285] * 
	* 
	W1018 17:15:09.674293   11251 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 17:15:09.677163   11251 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-164474 --alsologtostderr -v=1": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-164474
helpers_test.go:243: (dbg) docker inspect addons-164474:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "31000ccc16f2da54474476b9a5eeb51132587beec766c8579e875c01b1c476ea",
	        "Created": "2025-10-18T17:12:36.608114275Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 5474,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T17:12:36.693681146Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/31000ccc16f2da54474476b9a5eeb51132587beec766c8579e875c01b1c476ea/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/31000ccc16f2da54474476b9a5eeb51132587beec766c8579e875c01b1c476ea/hostname",
	        "HostsPath": "/var/lib/docker/containers/31000ccc16f2da54474476b9a5eeb51132587beec766c8579e875c01b1c476ea/hosts",
	        "LogPath": "/var/lib/docker/containers/31000ccc16f2da54474476b9a5eeb51132587beec766c8579e875c01b1c476ea/31000ccc16f2da54474476b9a5eeb51132587beec766c8579e875c01b1c476ea-json.log",
	        "Name": "/addons-164474",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-164474:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-164474",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "31000ccc16f2da54474476b9a5eeb51132587beec766c8579e875c01b1c476ea",
	                "LowerDir": "/var/lib/docker/overlay2/60c2b458f4fb11ddd0cefd6c98eefc86dd6f597e5e6af5b4ba683fc484a932fd-init/diff:/var/lib/docker/overlay2/584ab177b02ad2db5330471b7171ad39934c457d8615b9ee4939a04b59f78474/diff",
	                "MergedDir": "/var/lib/docker/overlay2/60c2b458f4fb11ddd0cefd6c98eefc86dd6f597e5e6af5b4ba683fc484a932fd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/60c2b458f4fb11ddd0cefd6c98eefc86dd6f597e5e6af5b4ba683fc484a932fd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/60c2b458f4fb11ddd0cefd6c98eefc86dd6f597e5e6af5b4ba683fc484a932fd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-164474",
	                "Source": "/var/lib/docker/volumes/addons-164474/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-164474",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-164474",
	                "name.minikube.sigs.k8s.io": "addons-164474",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2f9bac4ff314be3da3d5ff3000a087f1d269302b36a1df4ea82d00b0e76dae49",
	            "SandboxKey": "/var/run/docker/netns/2f9bac4ff314",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-164474": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "52:f8:ad:9e:06:ff",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "281458d9014b24585c5cceab2454c34e1b72788eb05df25f412bc3f15189db83",
	                    "EndpointID": "da4d366b9ded281de67a1839eb602323deefe10fd96fccfdb22aeeb48db46628",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-164474",
	                        "31000ccc16f2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
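
The NetworkSettings.Ports block in this inspect output is what the earlier `docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164474` calls read: the node's SSH endpoint resolves to 127.0.0.1:32768, matching the "new ssh client" line in the stderr above. A small sketch of the same lookup, reusing the template verbatim from the log (minus the extra quoting minikube adds):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Template taken from the cli_runner lines in the log above.
        format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
        out, err := exec.Command("docker", "container", "inspect", "-f", format, "addons-164474").Output()
        if err != nil {
            panic(err)
        }
        // Per the inspect output above this prints 32768.
        fmt.Println("SSH host port:", strings.TrimSpace(string(out)))
    }
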
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-164474 -n addons-164474
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-164474 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-164474 logs -n 25: (1.478017741s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-158495 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-158495   │ jenkins │ v1.37.0 │ 18 Oct 25 17:11 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 18 Oct 25 17:12 UTC │ 18 Oct 25 17:12 UTC │
	│ delete  │ -p download-only-158495                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-158495   │ jenkins │ v1.37.0 │ 18 Oct 25 17:12 UTC │ 18 Oct 25 17:12 UTC │
	│ start   │ -o=json --download-only -p download-only-339428 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-339428   │ jenkins │ v1.37.0 │ 18 Oct 25 17:12 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 18 Oct 25 17:12 UTC │ 18 Oct 25 17:12 UTC │
	│ delete  │ -p download-only-339428                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-339428   │ jenkins │ v1.37.0 │ 18 Oct 25 17:12 UTC │ 18 Oct 25 17:12 UTC │
	│ delete  │ -p download-only-158495                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-158495   │ jenkins │ v1.37.0 │ 18 Oct 25 17:12 UTC │ 18 Oct 25 17:12 UTC │
	│ delete  │ -p download-only-339428                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-339428   │ jenkins │ v1.37.0 │ 18 Oct 25 17:12 UTC │ 18 Oct 25 17:12 UTC │
	│ start   │ --download-only -p download-docker-146837 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-146837 │ jenkins │ v1.37.0 │ 18 Oct 25 17:12 UTC │                     │
	│ delete  │ -p download-docker-146837                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-146837 │ jenkins │ v1.37.0 │ 18 Oct 25 17:12 UTC │ 18 Oct 25 17:12 UTC │
	│ start   │ --download-only -p binary-mirror-644672 --alsologtostderr --binary-mirror http://127.0.0.1:44133 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-644672   │ jenkins │ v1.37.0 │ 18 Oct 25 17:12 UTC │                     │
	│ delete  │ -p binary-mirror-644672                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-644672   │ jenkins │ v1.37.0 │ 18 Oct 25 17:12 UTC │ 18 Oct 25 17:12 UTC │
	│ addons  │ enable dashboard -p addons-164474                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-164474          │ jenkins │ v1.37.0 │ 18 Oct 25 17:12 UTC │                     │
	│ addons  │ disable dashboard -p addons-164474                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-164474          │ jenkins │ v1.37.0 │ 18 Oct 25 17:12 UTC │                     │
	│ start   │ -p addons-164474 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-164474          │ jenkins │ v1.37.0 │ 18 Oct 25 17:12 UTC │ 18 Oct 25 17:14 UTC │
	│ addons  │ addons-164474 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-164474          │ jenkins │ v1.37.0 │ 18 Oct 25 17:14 UTC │                     │
	│ addons  │ addons-164474 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-164474          │ jenkins │ v1.37.0 │ 18 Oct 25 17:15 UTC │                     │
	│ addons  │ enable headlamp -p addons-164474 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-164474          │ jenkins │ v1.37.0 │ 18 Oct 25 17:15 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 17:12:09
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 17:12:09.621667    5077 out.go:360] Setting OutFile to fd 1 ...
	I1018 17:12:09.621843    5077 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 17:12:09.621872    5077 out.go:374] Setting ErrFile to fd 2...
	I1018 17:12:09.621894    5077 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 17:12:09.622181    5077 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-2509/.minikube/bin
	I1018 17:12:09.622687    5077 out.go:368] Setting JSON to false
	I1018 17:12:09.623453    5077 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3279,"bootTime":1760804251,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1018 17:12:09.623547    5077 start.go:141] virtualization:  
	I1018 17:12:09.627238    5077 out.go:179] * [addons-164474] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 17:12:09.630379    5077 out.go:179]   - MINIKUBE_LOCATION=21409
	I1018 17:12:09.630448    5077 notify.go:220] Checking for updates...
	I1018 17:12:09.636244    5077 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 17:12:09.639243    5077 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-2509/kubeconfig
	I1018 17:12:09.642208    5077 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-2509/.minikube
	I1018 17:12:09.645082    5077 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 17:12:09.648029    5077 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 17:12:09.651192    5077 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 17:12:09.683084    5077 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 17:12:09.683211    5077 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 17:12:09.744501    5077 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:49 SystemTime:2025-10-18 17:12:09.734318466 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 17:12:09.744612    5077 docker.go:318] overlay module found
	I1018 17:12:09.747678    5077 out.go:179] * Using the docker driver based on user configuration
	I1018 17:12:09.750516    5077 start.go:305] selected driver: docker
	I1018 17:12:09.750539    5077 start.go:925] validating driver "docker" against <nil>
	I1018 17:12:09.750553    5077 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 17:12:09.751286    5077 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 17:12:09.806318    5077 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:49 SystemTime:2025-10-18 17:12:09.797250313 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 17:12:09.806473    5077 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 17:12:09.806696    5077 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 17:12:09.809565    5077 out.go:179] * Using Docker driver with root privileges
	I1018 17:12:09.812466    5077 cni.go:84] Creating CNI manager for ""
	I1018 17:12:09.812540    5077 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 17:12:09.812554    5077 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 17:12:09.812633    5077 start.go:349] cluster config:
	{Name:addons-164474 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-164474 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1018 17:12:09.817507    5077 out.go:179] * Starting "addons-164474" primary control-plane node in "addons-164474" cluster
	I1018 17:12:09.820411    5077 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 17:12:09.823474    5077 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 17:12:09.826285    5077 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 17:12:09.826341    5077 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1018 17:12:09.826351    5077 cache.go:58] Caching tarball of preloaded images
	I1018 17:12:09.826440    5077 preload.go:233] Found /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 17:12:09.826450    5077 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 17:12:09.826794    5077 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/config.json ...
	I1018 17:12:09.826815    5077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/config.json: {Name:mk3348a25a1467de46c94788d07a2cffa213827d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:12:09.826971    5077 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 17:12:09.843152    5077 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 to local cache
	I1018 17:12:09.843268    5077 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory
	I1018 17:12:09.843286    5077 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory, skipping pull
	I1018 17:12:09.843291    5077 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in cache, skipping pull
	I1018 17:12:09.843298    5077 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 as a tarball
	I1018 17:12:09.843302    5077 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 from local cache
	I1018 17:12:27.269089    5077 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 from cached tarball
	I1018 17:12:27.269127    5077 cache.go:232] Successfully downloaded all kic artifacts
	I1018 17:12:27.269155    5077 start.go:360] acquireMachinesLock for addons-164474: {Name:mkab7365bdd9150f769d9384f833a7496379677e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 17:12:27.269263    5077 start.go:364] duration metric: took 89.675µs to acquireMachinesLock for "addons-164474"
	I1018 17:12:27.269294    5077 start.go:93] Provisioning new machine with config: &{Name:addons-164474 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-164474 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 17:12:27.269387    5077 start.go:125] createHost starting for "" (driver="docker")
	I1018 17:12:27.272894    5077 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1018 17:12:27.273133    5077 start.go:159] libmachine.API.Create for "addons-164474" (driver="docker")
	I1018 17:12:27.273182    5077 client.go:168] LocalClient.Create starting
	I1018 17:12:27.273305    5077 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem
	I1018 17:12:28.988090    5077 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem
	I1018 17:12:29.713605    5077 cli_runner.go:164] Run: docker network inspect addons-164474 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 17:12:29.729251    5077 cli_runner.go:211] docker network inspect addons-164474 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 17:12:29.729339    5077 network_create.go:284] running [docker network inspect addons-164474] to gather additional debugging logs...
	I1018 17:12:29.729359    5077 cli_runner.go:164] Run: docker network inspect addons-164474
	W1018 17:12:29.744927    5077 cli_runner.go:211] docker network inspect addons-164474 returned with exit code 1
	I1018 17:12:29.745021    5077 network_create.go:287] error running [docker network inspect addons-164474]: docker network inspect addons-164474: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-164474 not found
	I1018 17:12:29.745035    5077 network_create.go:289] output of [docker network inspect addons-164474]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-164474 not found
	
	** /stderr **
	I1018 17:12:29.745138    5077 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 17:12:29.761282    5077 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001997fe0}
	I1018 17:12:29.761334    5077 network_create.go:124] attempt to create docker network addons-164474 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1018 17:12:29.761391    5077 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-164474 addons-164474
	I1018 17:12:29.820225    5077 network_create.go:108] docker network addons-164474 192.168.49.0/24 created
	I1018 17:12:29.820252    5077 kic.go:121] calculated static IP "192.168.49.2" for the "addons-164474" container
	I1018 17:12:29.820325    5077 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 17:12:29.835209    5077 cli_runner.go:164] Run: docker volume create addons-164474 --label name.minikube.sigs.k8s.io=addons-164474 --label created_by.minikube.sigs.k8s.io=true
	I1018 17:12:29.853345    5077 oci.go:103] Successfully created a docker volume addons-164474
	I1018 17:12:29.853436    5077 cli_runner.go:164] Run: docker run --rm --name addons-164474-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-164474 --entrypoint /usr/bin/test -v addons-164474:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 17:12:32.115175    5077 cli_runner.go:217] Completed: docker run --rm --name addons-164474-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-164474 --entrypoint /usr/bin/test -v addons-164474:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib: (2.261703898s)
	I1018 17:12:32.115222    5077 oci.go:107] Successfully prepared a docker volume addons-164474
	I1018 17:12:32.115245    5077 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 17:12:32.115263    5077 kic.go:194] Starting extracting preloaded images to volume ...
	I1018 17:12:32.115328    5077 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-164474:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1018 17:12:36.541425    5077 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-164474:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.42605261s)
	I1018 17:12:36.541456    5077 kic.go:203] duration metric: took 4.426190507s to extract preloaded images to volume ...
	W1018 17:12:36.541601    5077 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1018 17:12:36.541715    5077 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 17:12:36.593589    5077 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-164474 --name addons-164474 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-164474 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-164474 --network addons-164474 --ip 192.168.49.2 --volume addons-164474:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 17:12:36.937166    5077 cli_runner.go:164] Run: docker container inspect addons-164474 --format={{.State.Running}}
	I1018 17:12:36.961301    5077 cli_runner.go:164] Run: docker container inspect addons-164474 --format={{.State.Status}}
	I1018 17:12:36.985066    5077 cli_runner.go:164] Run: docker exec addons-164474 stat /var/lib/dpkg/alternatives/iptables
	I1018 17:12:37.040440    5077 oci.go:144] the created container "addons-164474" has a running status.
	I1018 17:12:37.040471    5077 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/addons-164474/id_rsa...
	I1018 17:12:38.038559    5077 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21409-2509/.minikube/machines/addons-164474/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 17:12:38.070664    5077 cli_runner.go:164] Run: docker container inspect addons-164474 --format={{.State.Status}}
	I1018 17:12:38.090555    5077 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 17:12:38.090581    5077 kic_runner.go:114] Args: [docker exec --privileged addons-164474 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 17:12:38.132101    5077 cli_runner.go:164] Run: docker container inspect addons-164474 --format={{.State.Status}}
	I1018 17:12:38.149135    5077 machine.go:93] provisionDockerMachine start ...
	I1018 17:12:38.149238    5077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164474
	I1018 17:12:38.166517    5077 main.go:141] libmachine: Using SSH client type: native
	I1018 17:12:38.166841    5077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1018 17:12:38.166856    5077 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 17:12:38.167524    5077 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1018 17:12:41.316424    5077 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-164474
	
	I1018 17:12:41.316447    5077 ubuntu.go:182] provisioning hostname "addons-164474"
	I1018 17:12:41.316511    5077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164474
	I1018 17:12:41.333766    5077 main.go:141] libmachine: Using SSH client type: native
	I1018 17:12:41.334068    5077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1018 17:12:41.334085    5077 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-164474 && echo "addons-164474" | sudo tee /etc/hostname
	I1018 17:12:41.489500    5077 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-164474
	
	I1018 17:12:41.489571    5077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164474
	I1018 17:12:41.507243    5077 main.go:141] libmachine: Using SSH client type: native
	I1018 17:12:41.507543    5077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1018 17:12:41.507558    5077 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-164474' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-164474/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-164474' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 17:12:41.652846    5077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 17:12:41.652873    5077 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-2509/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-2509/.minikube}
	I1018 17:12:41.652892    5077 ubuntu.go:190] setting up certificates
	I1018 17:12:41.652901    5077 provision.go:84] configureAuth start
	I1018 17:12:41.652984    5077 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-164474
	I1018 17:12:41.672808    5077 provision.go:143] copyHostCerts
	I1018 17:12:41.672900    5077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem (1078 bytes)
	I1018 17:12:41.673043    5077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem (1123 bytes)
	I1018 17:12:41.673114    5077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem (1675 bytes)
	I1018 17:12:41.673164    5077 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem org=jenkins.addons-164474 san=[127.0.0.1 192.168.49.2 addons-164474 localhost minikube]
	I1018 17:12:42.112564    5077 provision.go:177] copyRemoteCerts
	I1018 17:12:42.112641    5077 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 17:12:42.112690    5077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164474
	I1018 17:12:42.136605    5077 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/addons-164474/id_rsa Username:docker}
	I1018 17:12:42.249792    5077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 17:12:42.268559    5077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1018 17:12:42.287022    5077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 17:12:42.306486    5077 provision.go:87] duration metric: took 653.51536ms to configureAuth
	I1018 17:12:42.306561    5077 ubuntu.go:206] setting minikube options for container-runtime
	I1018 17:12:42.306766    5077 config.go:182] Loaded profile config "addons-164474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:12:42.306879    5077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164474
	I1018 17:12:42.324977    5077 main.go:141] libmachine: Using SSH client type: native
	I1018 17:12:42.325296    5077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1018 17:12:42.325316    5077 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 17:12:42.577432    5077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 17:12:42.577454    5077 machine.go:96] duration metric: took 4.428298838s to provisionDockerMachine
	I1018 17:12:42.577464    5077 client.go:171] duration metric: took 15.304270688s to LocalClient.Create
	I1018 17:12:42.577477    5077 start.go:167] duration metric: took 15.304344831s to libmachine.API.Create "addons-164474"
	I1018 17:12:42.577485    5077 start.go:293] postStartSetup for "addons-164474" (driver="docker")
	I1018 17:12:42.577495    5077 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 17:12:42.577560    5077 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 17:12:42.577608    5077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164474
	I1018 17:12:42.596842    5077 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/addons-164474/id_rsa Username:docker}
	I1018 17:12:42.700607    5077 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 17:12:42.703748    5077 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 17:12:42.703776    5077 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 17:12:42.703787    5077 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/addons for local assets ...
	I1018 17:12:42.703852    5077 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/files for local assets ...
	I1018 17:12:42.703880    5077 start.go:296] duration metric: took 126.38925ms for postStartSetup
	I1018 17:12:42.704184    5077 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-164474
	I1018 17:12:42.720667    5077 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/config.json ...
	I1018 17:12:42.720977    5077 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 17:12:42.721026    5077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164474
	I1018 17:12:42.737628    5077 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/addons-164474/id_rsa Username:docker}
	I1018 17:12:42.837832    5077 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 17:12:42.842600    5077 start.go:128] duration metric: took 15.573199139s to createHost
	I1018 17:12:42.842624    5077 start.go:83] releasing machines lock for "addons-164474", held for 15.57334772s
	I1018 17:12:42.842693    5077 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-164474
	I1018 17:12:42.859797    5077 ssh_runner.go:195] Run: cat /version.json
	I1018 17:12:42.859856    5077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164474
	I1018 17:12:42.860106    5077 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 17:12:42.860166    5077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164474
	I1018 17:12:42.879010    5077 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/addons-164474/id_rsa Username:docker}
	I1018 17:12:42.897077    5077 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/addons-164474/id_rsa Username:docker}
	I1018 17:12:43.072311    5077 ssh_runner.go:195] Run: systemctl --version
	I1018 17:12:43.079106    5077 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 17:12:43.115024    5077 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 17:12:43.119511    5077 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 17:12:43.119625    5077 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 17:12:43.149819    5077 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1018 17:12:43.149837    5077 start.go:495] detecting cgroup driver to use...
	I1018 17:12:43.149885    5077 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 17:12:43.149943    5077 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 17:12:43.167266    5077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 17:12:43.180673    5077 docker.go:218] disabling cri-docker service (if available) ...
	I1018 17:12:43.180792    5077 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 17:12:43.198285    5077 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 17:12:43.216792    5077 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 17:12:43.328468    5077 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 17:12:43.459500    5077 docker.go:234] disabling docker service ...
	I1018 17:12:43.459573    5077 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 17:12:43.479911    5077 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 17:12:43.493178    5077 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 17:12:43.619470    5077 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 17:12:43.737101    5077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 17:12:43.749643    5077 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 17:12:43.763411    5077 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 17:12:43.763521    5077 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:12:43.772425    5077 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 17:12:43.772494    5077 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:12:43.781476    5077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:12:43.790204    5077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:12:43.798849    5077 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 17:12:43.806861    5077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:12:43.815102    5077 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:12:43.827722    5077 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:12:43.837302    5077 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 17:12:43.845016    5077 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1018 17:12:43.845105    5077 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1018 17:12:43.859099    5077 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 17:12:43.866681    5077 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 17:12:43.979821    5077 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 17:12:44.105409    5077 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 17:12:44.105509    5077 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 17:12:44.109267    5077 start.go:563] Will wait 60s for crictl version
	I1018 17:12:44.109350    5077 ssh_runner.go:195] Run: which crictl
	I1018 17:12:44.112915    5077 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 17:12:44.136251    5077 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 17:12:44.136403    5077 ssh_runner.go:195] Run: crio --version
	I1018 17:12:44.165110    5077 ssh_runner.go:195] Run: crio --version
	I1018 17:12:44.200662    5077 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 17:12:44.203565    5077 cli_runner.go:164] Run: docker network inspect addons-164474 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 17:12:44.219397    5077 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1018 17:12:44.223299    5077 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 17:12:44.233896    5077 kubeadm.go:883] updating cluster {Name:addons-164474 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-164474 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 17:12:44.234014    5077 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 17:12:44.234076    5077 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 17:12:44.268341    5077 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 17:12:44.268366    5077 crio.go:433] Images already preloaded, skipping extraction
	I1018 17:12:44.268423    5077 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 17:12:44.293173    5077 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 17:12:44.293196    5077 cache_images.go:85] Images are preloaded, skipping loading
	I1018 17:12:44.293203    5077 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1018 17:12:44.293285    5077 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-164474 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-164474 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 17:12:44.293367    5077 ssh_runner.go:195] Run: crio config
	I1018 17:12:44.354410    5077 cni.go:84] Creating CNI manager for ""
	I1018 17:12:44.354490    5077 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 17:12:44.354519    5077 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 17:12:44.354573    5077 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-164474 NodeName:addons-164474 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 17:12:44.354735    5077 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-164474"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 17:12:44.354833    5077 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 17:12:44.362591    5077 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 17:12:44.362660    5077 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 17:12:44.370387    5077 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1018 17:12:44.383292    5077 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 17:12:44.395900    5077 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1018 17:12:44.408204    5077 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1018 17:12:44.411662    5077 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 17:12:44.420965    5077 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 17:12:44.534582    5077 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 17:12:44.550424    5077 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474 for IP: 192.168.49.2
	I1018 17:12:44.550452    5077 certs.go:195] generating shared ca certs ...
	I1018 17:12:44.550468    5077 certs.go:227] acquiring lock for ca certs: {Name:mk544ed642fa2832e9f6dd22fa45f3270b7c1ad4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:12:44.550669    5077 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key
	I1018 17:12:45.001586    5077 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt ...
	I1018 17:12:45.001620    5077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt: {Name:mkc15b5d821f189f0721cb2e35bd5820e47a127a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:12:45.001848    5077 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key ...
	I1018 17:12:45.001865    5077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key: {Name:mk8217c17a1e8278b02fa13c181862df662fbda0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:12:45.001956    5077 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key
	I1018 17:12:45.441448    5077 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.crt ...
	I1018 17:12:45.441487    5077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.crt: {Name:mk328c040b1092e413560a324ebe5933d3c0ea7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:12:45.441671    5077 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key ...
	I1018 17:12:45.441683    5077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key: {Name:mkcc8d257200bfcc88192f5105245d9327105cd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:12:45.441761    5077 certs.go:257] generating profile certs ...
	I1018 17:12:45.441825    5077 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/client.key
	I1018 17:12:45.441841    5077 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/client.crt with IP's: []
	I1018 17:12:45.780822    5077 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/client.crt ...
	I1018 17:12:45.780852    5077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/client.crt: {Name:mkcd971006954b338595f6ebdf5b64d252e82cdd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:12:45.781049    5077 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/client.key ...
	I1018 17:12:45.781063    5077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/client.key: {Name:mk89db0728ef38a8fa29dc5172d437227dac855b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:12:45.781146    5077 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/apiserver.key.601289e7
	I1018 17:12:45.781166    5077 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/apiserver.crt.601289e7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1018 17:12:46.656809    5077 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/apiserver.crt.601289e7 ...
	I1018 17:12:46.656847    5077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/apiserver.crt.601289e7: {Name:mkfe5d4799bd0f5c8315bf8173aa80da66216675 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:12:46.657045    5077 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/apiserver.key.601289e7 ...
	I1018 17:12:46.657062    5077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/apiserver.key.601289e7: {Name:mkfa22d393ecd94f5140b71eb78e9296782027b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:12:46.657146    5077 certs.go:382] copying /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/apiserver.crt.601289e7 -> /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/apiserver.crt
	I1018 17:12:46.657238    5077 certs.go:386] copying /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/apiserver.key.601289e7 -> /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/apiserver.key
	I1018 17:12:46.657296    5077 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/proxy-client.key
	I1018 17:12:46.657317    5077 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/proxy-client.crt with IP's: []
	I1018 17:12:47.154729    5077 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/proxy-client.crt ...
	I1018 17:12:47.154759    5077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/proxy-client.crt: {Name:mk474d7203f62648592ecd8e7d65433d3b3f1580 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:12:47.154946    5077 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/proxy-client.key ...
	I1018 17:12:47.154959    5077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/proxy-client.key: {Name:mk637a4d7bd3076882b96bf3ead69e44353cab76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:12:47.155171    5077 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 17:12:47.155214    5077 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem (1078 bytes)
	I1018 17:12:47.155247    5077 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem (1123 bytes)
	I1018 17:12:47.155279    5077 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem (1675 bytes)
	I1018 17:12:47.155863    5077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 17:12:47.174366    5077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 17:12:47.192238    5077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 17:12:47.209863    5077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 17:12:47.227449    5077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1018 17:12:47.244844    5077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 17:12:47.263272    5077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 17:12:47.281667    5077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 17:12:47.298545    5077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 17:12:47.315668    5077 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 17:12:47.328109    5077 ssh_runner.go:195] Run: openssl version
	I1018 17:12:47.334469    5077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 17:12:47.342732    5077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:12:47.346325    5077 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:12:47.346446    5077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:12:47.387214    5077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 17:12:47.395381    5077 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 17:12:47.398826    5077 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 17:12:47.398870    5077 kubeadm.go:400] StartCluster: {Name:addons-164474 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-164474 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 17:12:47.398947    5077 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 17:12:47.399014    5077 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 17:12:47.427922    5077 cri.go:89] found id: ""
	I1018 17:12:47.428019    5077 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 17:12:47.436715    5077 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 17:12:47.444407    5077 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 17:12:47.444486    5077 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 17:12:47.451903    5077 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 17:12:47.451967    5077 kubeadm.go:157] found existing configuration files:
	
	I1018 17:12:47.452024    5077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 17:12:47.459571    5077 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 17:12:47.459667    5077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 17:12:47.467138    5077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 17:12:47.474559    5077 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 17:12:47.474655    5077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 17:12:47.482257    5077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 17:12:47.490623    5077 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 17:12:47.490713    5077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 17:12:47.498656    5077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 17:12:47.507073    5077 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 17:12:47.507165    5077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1018 17:12:47.514830    5077 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 17:12:47.559238    5077 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 17:12:47.559476    5077 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 17:12:47.586887    5077 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 17:12:47.587002    5077 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1018 17:12:47.587059    5077 kubeadm.go:318] OS: Linux
	I1018 17:12:47.587135    5077 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 17:12:47.587210    5077 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1018 17:12:47.587281    5077 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 17:12:47.587356    5077 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 17:12:47.587431    5077 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 17:12:47.587514    5077 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 17:12:47.587586    5077 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 17:12:47.587670    5077 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 17:12:47.587744    5077 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1018 17:12:47.650112    5077 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 17:12:47.650278    5077 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 17:12:47.650412    5077 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 17:12:47.657668    5077 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 17:12:47.663816    5077 out.go:252]   - Generating certificates and keys ...
	I1018 17:12:47.663983    5077 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 17:12:47.664085    5077 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 17:12:48.489879    5077 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 17:12:48.982468    5077 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 17:12:49.433194    5077 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 17:12:49.948032    5077 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 17:12:50.067272    5077 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 17:12:50.067593    5077 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-164474 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1018 17:12:50.194869    5077 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 17:12:50.195245    5077 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-164474 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1018 17:12:51.225657    5077 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 17:12:51.554944    5077 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 17:12:52.101880    5077 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 17:12:52.102245    5077 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 17:12:52.639120    5077 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 17:12:53.350459    5077 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 17:12:53.440832    5077 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 17:12:53.970719    5077 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 17:12:54.244861    5077 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 17:12:54.245685    5077 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 17:12:54.249671    5077 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 17:12:54.253100    5077 out.go:252]   - Booting up control plane ...
	I1018 17:12:54.253208    5077 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 17:12:54.253296    5077 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 17:12:54.254040    5077 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 17:12:54.272180    5077 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 17:12:54.272298    5077 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 17:12:54.279288    5077 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 17:12:54.279551    5077 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 17:12:54.279728    5077 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 17:12:54.403451    5077 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 17:12:54.403605    5077 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 17:12:55.405425    5077 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.002320192s
	I1018 17:12:55.408744    5077 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 17:12:55.408869    5077 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1018 17:12:55.409306    5077 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 17:12:55.409403    5077 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 17:12:56.501030    5077 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.091552133s
	I1018 17:12:58.349704    5077 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.940921357s
	I1018 17:13:00.411847    5077 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 5.002705838s
	I1018 17:13:00.437178    5077 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 17:13:00.465226    5077 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 17:13:00.482839    5077 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 17:13:00.483053    5077 kubeadm.go:318] [mark-control-plane] Marking the node addons-164474 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 17:13:00.500261    5077 kubeadm.go:318] [bootstrap-token] Using token: sagae6.mnuvln85kenb52pb
	I1018 17:13:00.503203    5077 out.go:252]   - Configuring RBAC rules ...
	I1018 17:13:00.503342    5077 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 17:13:00.514958    5077 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 17:13:00.524308    5077 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 17:13:00.529226    5077 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 17:13:00.534700    5077 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 17:13:00.541297    5077 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 17:13:00.821649    5077 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 17:13:01.263711    5077 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 17:13:01.819779    5077 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 17:13:01.820789    5077 kubeadm.go:318] 
	I1018 17:13:01.820865    5077 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 17:13:01.820878    5077 kubeadm.go:318] 
	I1018 17:13:01.820978    5077 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 17:13:01.820989    5077 kubeadm.go:318] 
	I1018 17:13:01.821015    5077 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 17:13:01.821080    5077 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 17:13:01.821142    5077 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 17:13:01.821153    5077 kubeadm.go:318] 
	I1018 17:13:01.821210    5077 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 17:13:01.821220    5077 kubeadm.go:318] 
	I1018 17:13:01.821270    5077 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 17:13:01.821278    5077 kubeadm.go:318] 
	I1018 17:13:01.821332    5077 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 17:13:01.821413    5077 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 17:13:01.821490    5077 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 17:13:01.821499    5077 kubeadm.go:318] 
	I1018 17:13:01.821589    5077 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 17:13:01.821673    5077 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 17:13:01.821681    5077 kubeadm.go:318] 
	I1018 17:13:01.821769    5077 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token sagae6.mnuvln85kenb52pb \
	I1018 17:13:01.821879    5077 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:d0244c5bf86cdf97546c6a22045cb6ed9d7ead524d9c98d9ca35da77d5d7a04d \
	I1018 17:13:01.821905    5077 kubeadm.go:318] 	--control-plane 
	I1018 17:13:01.821912    5077 kubeadm.go:318] 
	I1018 17:13:01.822001    5077 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 17:13:01.822011    5077 kubeadm.go:318] 
	I1018 17:13:01.822098    5077 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token sagae6.mnuvln85kenb52pb \
	I1018 17:13:01.822469    5077 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:d0244c5bf86cdf97546c6a22045cb6ed9d7ead524d9c98d9ca35da77d5d7a04d 
	I1018 17:13:01.825810    5077 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1018 17:13:01.826101    5077 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1018 17:13:01.826228    5077 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1018 17:13:01.826255    5077 cni.go:84] Creating CNI manager for ""
	I1018 17:13:01.826266    5077 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 17:13:01.831157    5077 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1018 17:13:01.834000    5077 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 17:13:01.837985    5077 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1018 17:13:01.838004    5077 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 17:13:01.851399    5077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1018 17:13:02.138247    5077 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 17:13:02.138382    5077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 17:13:02.138478    5077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-164474 minikube.k8s.io/updated_at=2025_10_18T17_13_02_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404 minikube.k8s.io/name=addons-164474 minikube.k8s.io/primary=true
	I1018 17:13:02.285593    5077 ops.go:34] apiserver oom_adj: -16
	I1018 17:13:02.285711    5077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 17:13:02.786818    5077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 17:13:03.285819    5077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 17:13:03.786647    5077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 17:13:04.285991    5077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 17:13:04.786693    5077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 17:13:05.286574    5077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 17:13:05.786368    5077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 17:13:06.286775    5077 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 17:13:06.382716    5077 kubeadm.go:1113] duration metric: took 4.244381969s to wait for elevateKubeSystemPrivileges
	I1018 17:13:06.382745    5077 kubeadm.go:402] duration metric: took 18.983878488s to StartCluster
	I1018 17:13:06.382762    5077 settings.go:142] acquiring lock: {Name:mk3a3fd093bc95e20cc1842611fedcbe4a79e692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:13:06.382882    5077 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-2509/kubeconfig
	I1018 17:13:06.383219    5077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/kubeconfig: {Name:mk229d7d0b6575771899e0ce8a346bbddf5cf86b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:13:06.383403    5077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 17:13:06.383428    5077 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 17:13:06.383664    5077 config.go:182] Loaded profile config "addons-164474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:13:06.383703    5077 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1018 17:13:06.383784    5077 addons.go:69] Setting yakd=true in profile "addons-164474"
	I1018 17:13:06.383803    5077 addons.go:238] Setting addon yakd=true in "addons-164474"
	I1018 17:13:06.383832    5077 host.go:66] Checking if "addons-164474" exists ...
	I1018 17:13:06.384294    5077 cli_runner.go:164] Run: docker container inspect addons-164474 --format={{.State.Status}}
	I1018 17:13:06.384604    5077 addons.go:69] Setting inspektor-gadget=true in profile "addons-164474"
	I1018 17:13:06.384627    5077 addons.go:238] Setting addon inspektor-gadget=true in "addons-164474"
	I1018 17:13:06.384648    5077 host.go:66] Checking if "addons-164474" exists ...
	I1018 17:13:06.385072    5077 cli_runner.go:164] Run: docker container inspect addons-164474 --format={{.State.Status}}
	I1018 17:13:06.385387    5077 addons.go:69] Setting metrics-server=true in profile "addons-164474"
	I1018 17:13:06.385410    5077 addons.go:238] Setting addon metrics-server=true in "addons-164474"
	I1018 17:13:06.385433    5077 host.go:66] Checking if "addons-164474" exists ...
	I1018 17:13:06.385832    5077 cli_runner.go:164] Run: docker container inspect addons-164474 --format={{.State.Status}}
	I1018 17:13:06.385992    5077 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-164474"
	I1018 17:13:06.386026    5077 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-164474"
	I1018 17:13:06.386052    5077 host.go:66] Checking if "addons-164474" exists ...
	I1018 17:13:06.386438    5077 cli_runner.go:164] Run: docker container inspect addons-164474 --format={{.State.Status}}
	I1018 17:13:06.389587    5077 addons.go:69] Setting cloud-spanner=true in profile "addons-164474"
	I1018 17:13:06.389617    5077 addons.go:238] Setting addon cloud-spanner=true in "addons-164474"
	I1018 17:13:06.389647    5077 host.go:66] Checking if "addons-164474" exists ...
	I1018 17:13:06.390067    5077 cli_runner.go:164] Run: docker container inspect addons-164474 --format={{.State.Status}}
	I1018 17:13:06.391205    5077 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-164474"
	I1018 17:13:06.391230    5077 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-164474"
	I1018 17:13:06.391265    5077 host.go:66] Checking if "addons-164474" exists ...
	I1018 17:13:06.391684    5077 cli_runner.go:164] Run: docker container inspect addons-164474 --format={{.State.Status}}
	I1018 17:13:06.395214    5077 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-164474"
	I1018 17:13:06.395291    5077 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-164474"
	I1018 17:13:06.395324    5077 host.go:66] Checking if "addons-164474" exists ...
	I1018 17:13:06.395812    5077 cli_runner.go:164] Run: docker container inspect addons-164474 --format={{.State.Status}}
	I1018 17:13:06.400747    5077 addons.go:69] Setting default-storageclass=true in profile "addons-164474"
	I1018 17:13:06.400787    5077 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-164474"
	I1018 17:13:06.401238    5077 cli_runner.go:164] Run: docker container inspect addons-164474 --format={{.State.Status}}
	I1018 17:13:06.435000    5077 addons.go:69] Setting registry=true in profile "addons-164474"
	I1018 17:13:06.435091    5077 addons.go:238] Setting addon registry=true in "addons-164474"
	I1018 17:13:06.435152    5077 host.go:66] Checking if "addons-164474" exists ...
	I1018 17:13:06.435644    5077 cli_runner.go:164] Run: docker container inspect addons-164474 --format={{.State.Status}}
	I1018 17:13:06.442188    5077 addons.go:69] Setting gcp-auth=true in profile "addons-164474"
	I1018 17:13:06.442230    5077 mustload.go:65] Loading cluster: addons-164474
	I1018 17:13:06.442436    5077 config.go:182] Loaded profile config "addons-164474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:13:06.442706    5077 cli_runner.go:164] Run: docker container inspect addons-164474 --format={{.State.Status}}
	I1018 17:13:06.452686    5077 addons.go:69] Setting registry-creds=true in profile "addons-164474"
	I1018 17:13:06.452726    5077 addons.go:238] Setting addon registry-creds=true in "addons-164474"
	I1018 17:13:06.452761    5077 host.go:66] Checking if "addons-164474" exists ...
	I1018 17:13:06.453240    5077 cli_runner.go:164] Run: docker container inspect addons-164474 --format={{.State.Status}}
	I1018 17:13:06.456063    5077 addons.go:69] Setting ingress=true in profile "addons-164474"
	I1018 17:13:06.456104    5077 addons.go:238] Setting addon ingress=true in "addons-164474"
	I1018 17:13:06.456147    5077 host.go:66] Checking if "addons-164474" exists ...
	I1018 17:13:06.456618    5077 cli_runner.go:164] Run: docker container inspect addons-164474 --format={{.State.Status}}
	I1018 17:13:06.477305    5077 addons.go:69] Setting ingress-dns=true in profile "addons-164474"
	I1018 17:13:06.477338    5077 addons.go:238] Setting addon ingress-dns=true in "addons-164474"
	I1018 17:13:06.477470    5077 host.go:66] Checking if "addons-164474" exists ...
	I1018 17:13:06.478191    5077 cli_runner.go:164] Run: docker container inspect addons-164474 --format={{.State.Status}}
	I1018 17:13:06.484906    5077 addons.go:69] Setting storage-provisioner=true in profile "addons-164474"
	I1018 17:13:06.485008    5077 addons.go:238] Setting addon storage-provisioner=true in "addons-164474"
	I1018 17:13:06.485058    5077 host.go:66] Checking if "addons-164474" exists ...
	I1018 17:13:06.485539    5077 cli_runner.go:164] Run: docker container inspect addons-164474 --format={{.State.Status}}
	I1018 17:13:06.498185    5077 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-164474"
	I1018 17:13:06.498220    5077 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-164474"
	I1018 17:13:06.498321    5077 out.go:179] * Verifying Kubernetes components...
	I1018 17:13:06.498577    5077 cli_runner.go:164] Run: docker container inspect addons-164474 --format={{.State.Status}}
	I1018 17:13:06.529110    5077 addons.go:69] Setting volcano=true in profile "addons-164474"
	I1018 17:13:06.529151    5077 addons.go:238] Setting addon volcano=true in "addons-164474"
	I1018 17:13:06.529184    5077 host.go:66] Checking if "addons-164474" exists ...
	I1018 17:13:06.529627    5077 cli_runner.go:164] Run: docker container inspect addons-164474 --format={{.State.Status}}
	I1018 17:13:06.545299    5077 addons.go:69] Setting volumesnapshots=true in profile "addons-164474"
	I1018 17:13:06.545328    5077 addons.go:238] Setting addon volumesnapshots=true in "addons-164474"
	I1018 17:13:06.545369    5077 host.go:66] Checking if "addons-164474" exists ...
	I1018 17:13:06.545878    5077 cli_runner.go:164] Run: docker container inspect addons-164474 --format={{.State.Status}}
	I1018 17:13:06.606490    5077 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1018 17:13:06.619341    5077 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1018 17:13:06.626233    5077 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1018 17:13:06.626374    5077 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1018 17:13:06.626431    5077 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1018 17:13:06.626554    5077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164474
	I1018 17:13:06.626830    5077 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 17:13:06.627144    5077 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1018 17:13:06.627164    5077 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1018 17:13:06.627227    5077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164474
	I1018 17:13:06.640650    5077 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1018 17:13:06.641273    5077 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1018 17:13:06.659856    5077 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1018 17:13:06.660003    5077 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1018 17:13:06.660423    5077 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1018 17:13:06.666113    5077 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1018 17:13:06.666197    5077 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1018 17:13:06.666357    5077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164474
	I1018 17:13:06.660496    5077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164474
	I1018 17:13:06.681811    5077 addons.go:238] Setting addon default-storageclass=true in "addons-164474"
	I1018 17:13:06.681858    5077 host.go:66] Checking if "addons-164474" exists ...
	I1018 17:13:06.682293    5077 cli_runner.go:164] Run: docker container inspect addons-164474 --format={{.State.Status}}
	I1018 17:13:06.685853    5077 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1018 17:13:06.685880    5077 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1018 17:13:06.685940    5077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164474
	I1018 17:13:06.693472    5077 host.go:66] Checking if "addons-164474" exists ...
	I1018 17:13:06.695425    5077 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1018 17:13:06.698554    5077 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1018 17:13:06.698618    5077 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1018 17:13:06.698715    5077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164474
	I1018 17:13:06.711663    5077 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1018 17:13:06.715302    5077 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1018 17:13:06.718899    5077 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1018 17:13:06.721875    5077 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1018 17:13:06.722895    5077 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-164474"
	I1018 17:13:06.722930    5077 host.go:66] Checking if "addons-164474" exists ...
	I1018 17:13:06.723327    5077 cli_runner.go:164] Run: docker container inspect addons-164474 --format={{.State.Status}}
	I1018 17:13:06.755522    5077 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1018 17:13:06.795307    5077 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1018 17:13:06.797366    5077 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	W1018 17:13:06.797679    5077 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1018 17:13:06.809044    5077 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1018 17:13:06.811542    5077 out.go:179]   - Using image docker.io/registry:3.0.0
	I1018 17:13:06.811602    5077 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1018 17:13:06.814777    5077 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 17:13:06.812089    5077 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/addons-164474/id_rsa Username:docker}
	I1018 17:13:06.814738    5077 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1018 17:13:06.814758    5077 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 17:13:06.820600    5077 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1018 17:13:06.820718    5077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164474
	I1018 17:13:06.837607    5077 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1018 17:13:06.837853    5077 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1018 17:13:06.838043    5077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164474
	I1018 17:13:06.848551    5077 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 17:13:06.848572    5077 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 17:13:06.848635    5077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164474
	I1018 17:13:06.881318    5077 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1018 17:13:06.881341    5077 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1018 17:13:06.881402    5077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164474
	I1018 17:13:06.887795    5077 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/addons-164474/id_rsa Username:docker}
	I1018 17:13:06.893440    5077 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1018 17:13:06.897848    5077 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1018 17:13:06.897924    5077 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1018 17:13:06.905134    5077 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1018 17:13:06.905158    5077 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1018 17:13:06.905225    5077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164474
	I1018 17:13:06.915167    5077 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1018 17:13:06.915205    5077 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1018 17:13:06.915291    5077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164474
	I1018 17:13:06.924676    5077 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/addons-164474/id_rsa Username:docker}
	I1018 17:13:06.932977    5077 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 17:13:06.933001    5077 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 17:13:06.933062    5077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164474
	I1018 17:13:06.933213    5077 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/addons-164474/id_rsa Username:docker}
	I1018 17:13:06.935785    5077 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1018 17:13:06.938654    5077 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1018 17:13:06.938677    5077 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1018 17:13:06.938744    5077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164474
	I1018 17:13:06.961564    5077 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/addons-164474/id_rsa Username:docker}
	I1018 17:13:06.965326    5077 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1018 17:13:06.972321    5077 out.go:179]   - Using image docker.io/busybox:stable
	I1018 17:13:06.975154    5077 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1018 17:13:06.975184    5077 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1018 17:13:06.975244    5077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164474
	I1018 17:13:06.980873    5077 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/addons-164474/id_rsa Username:docker}
	I1018 17:13:07.029457    5077 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/addons-164474/id_rsa Username:docker}
	I1018 17:13:07.057634    5077 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/addons-164474/id_rsa Username:docker}
	I1018 17:13:07.059916    5077 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/addons-164474/id_rsa Username:docker}
	I1018 17:13:07.074458    5077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1018 17:13:07.117275    5077 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/addons-164474/id_rsa Username:docker}
	I1018 17:13:07.124488    5077 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/addons-164474/id_rsa Username:docker}
	I1018 17:13:07.125591    5077 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/addons-164474/id_rsa Username:docker}
	I1018 17:13:07.126281    5077 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/addons-164474/id_rsa Username:docker}
	I1018 17:13:07.127203    5077 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/addons-164474/id_rsa Username:docker}
	I1018 17:13:07.130995    5077 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/addons-164474/id_rsa Username:docker}
	W1018 17:13:07.137812    5077 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1018 17:13:07.137857    5077 retry.go:31] will retry after 339.387014ms: ssh: handshake failed: EOF
	W1018 17:13:07.138870    5077 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1018 17:13:07.138895    5077 retry.go:31] will retry after 251.728274ms: ssh: handshake failed: EOF
	I1018 17:13:07.269583    5077 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 17:13:07.626672    5077 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1018 17:13:07.626703    5077 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1018 17:13:07.697632    5077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1018 17:13:07.713029    5077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 17:13:07.769073    5077 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1018 17:13:07.769145    5077 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1018 17:13:07.801553    5077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 17:13:07.855864    5077 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1018 17:13:07.855926    5077 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1018 17:13:07.866547    5077 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1018 17:13:07.866613    5077 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1018 17:13:07.871697    5077 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1018 17:13:07.871763    5077 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1018 17:13:07.875145    5077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1018 17:13:07.878798    5077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1018 17:13:07.905977    5077 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1018 17:13:07.906039    5077 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1018 17:13:07.919786    5077 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1018 17:13:07.919848    5077 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1018 17:13:07.944876    5077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1018 17:13:07.967894    5077 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1018 17:13:07.967964    5077 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1018 17:13:07.971638    5077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1018 17:13:07.979771    5077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1018 17:13:07.990219    5077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 17:13:08.004249    5077 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1018 17:13:08.004336    5077 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1018 17:13:08.008340    5077 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1018 17:13:08.008425    5077 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1018 17:13:08.091828    5077 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1018 17:13:08.091901    5077 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1018 17:13:08.110811    5077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1018 17:13:08.133680    5077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1018 17:13:08.150149    5077 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1018 17:13:08.150226    5077 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1018 17:13:08.160806    5077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1018 17:13:08.237516    5077 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1018 17:13:08.237588    5077 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1018 17:13:08.296330    5077 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1018 17:13:08.296398    5077 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1018 17:13:08.300176    5077 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1018 17:13:08.300242    5077 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1018 17:13:08.440506    5077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1018 17:13:08.446374    5077 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.371882684s)
	I1018 17:13:08.446533    5077 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1018 17:13:08.446461    5077 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.176855084s)
	I1018 17:13:08.448126    5077 node_ready.go:35] waiting up to 6m0s for node "addons-164474" to be "Ready" ...
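	[note] The long sed pipeline that just completed rewrites the coredns ConfigMap so the in-cluster resolver answers host.minikube.internal with the host gateway address. A hedged reconstruction of the Corefile fragment that command injects (derived only from the sed expressions shown in the log, not read back from the cluster):
	
		hosts {
		   192.168.49.1 host.minikube.internal
		   fallthrough
		}
	
	The second -e expression inserts a "log" directive above the existing "errors" line, which is why query logging shows up in the coredns pod after the replace.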
	I1018 17:13:08.486470    5077 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1018 17:13:08.486550    5077 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1018 17:13:08.504452    5077 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1018 17:13:08.504515    5077 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1018 17:13:08.705046    5077 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1018 17:13:08.705115    5077 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1018 17:13:08.740199    5077 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1018 17:13:08.740274    5077 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1018 17:13:08.936478    5077 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1018 17:13:08.936545    5077 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1018 17:13:08.951393    5077 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-164474" context rescaled to 1 replicas
	I1018 17:13:08.986569    5077 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 17:13:08.986642    5077 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1018 17:13:09.200327    5077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 17:13:09.298167    5077 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1018 17:13:09.298235    5077 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1018 17:13:09.474354    5077 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1018 17:13:09.474420    5077 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1018 17:13:09.779653    5077 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1018 17:13:09.779723    5077 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1018 17:13:09.847795    5077 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1018 17:13:09.847865    5077 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1018 17:13:09.986318    5077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1018 17:13:10.154202    5077 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.456496614s)
	W1018 17:13:10.490523    5077 node_ready.go:57] node "addons-164474" has "Ready":"False" status (will retry)
	I1018 17:13:11.502374    5077 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.789267682s)
	I1018 17:13:11.502473    5077 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.700849438s)
	I1018 17:13:11.502736    5077 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (3.627529941s)
	I1018 17:13:11.502796    5077 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.62394114s)
	I1018 17:13:11.502833    5077 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.557852447s)
	I1018 17:13:11.693266    5077 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.721551766s)
	I1018 17:13:12.777152    5077 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.797299145s)
	I1018 17:13:12.777318    5077 addons.go:479] Verifying addon ingress=true in "addons-164474"
	I1018 17:13:12.777484    5077 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.666600622s)
	I1018 17:13:12.777521    5077 addons.go:479] Verifying addon metrics-server=true in "addons-164474"
	I1018 17:13:12.777667    5077 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.643925016s)
	I1018 17:13:12.777249    5077 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.786960383s)
	W1018 17:13:12.777730    5077 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 17:13:12.777748    5077 retry.go:31] will retry after 149.54801ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
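	[note] This retry will keep failing for the same reason: kubectl's client-side validation rejects /etc/kubernetes/addons/ig-crd.yaml because its top-level apiVersion and kind fields are not set, independent of cluster state. A hedged way to confirm that by hand against this run (profile name, binary path, and manifest path copied from the log; the head length is arbitrary):
	
		minikube -p addons-164474 ssh -- sudo head -n 5 /etc/kubernetes/addons/ig-crd.yaml
		minikube -p addons-164474 ssh -- sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --dry-run=client -f /etc/kubernetes/addons/ig-crd.yaml
	
	The dry run should surface the same "apiVersion not set, kind not set" error, since the check happens before anything reaches the API server.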
	I1018 17:13:12.777789    5077 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.616909525s)
	I1018 17:13:12.777798    5077 addons.go:479] Verifying addon registry=true in "addons-164474"
	I1018 17:13:12.777923    5077 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.337338894s)
	I1018 17:13:12.778285    5077 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.577858096s)
	W1018 17:13:12.778332    5077 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1018 17:13:12.778346    5077 retry.go:31] will retry after 275.628822ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
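	[note] Unlike the ig-crd failure above, this one is an ordering problem rather than a malformed manifest: the VolumeSnapshotClass in csi-hostpath-snapshotclass.yaml is submitted in the same kubectl apply batch as the CRDs that define its kind, so the API server has no mapping for it yet. In this log the forced re-apply issued at 17:13:13 completes at 17:13:15 with no further retry for that file, which suggests it succeeds once the CRDs are registered. A hedged sketch of checking that registration before retrying (paths and binary location copied from the log):
	
		sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl get crd volumesnapshotclasses.snapshot.storage.k8s.io
		sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml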
	I1018 17:13:12.781551    5077 out.go:179] * Verifying registry addon...
	I1018 17:13:12.781713    5077 out.go:179] * Verifying ingress addon...
	I1018 17:13:12.781759    5077 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-164474 service yakd-dashboard -n yakd-dashboard
	
	I1018 17:13:12.786023    5077 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1018 17:13:12.787087    5077 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1018 17:13:12.790733    5077 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1018 17:13:12.790806    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:12.791002    5077 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1018 17:13:12.791036    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:12.927512    5077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1018 17:13:12.954859    5077 node_ready.go:57] node "addons-164474" has "Ready":"False" status (will retry)
	I1018 17:13:13.054922    5077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 17:13:13.067376    5077 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.080953665s)
	I1018 17:13:13.067475    5077 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-164474"
	I1018 17:13:13.072401    5077 out.go:179] * Verifying csi-hostpath-driver addon...
	I1018 17:13:13.076098    5077 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1018 17:13:13.085487    5077 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1018 17:13:13.085562    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:13.291504    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:13.291706    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:13.588650    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:13.792011    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:13.792501    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:13.938086    5077 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.01048657s)
	W1018 17:13:13.938165    5077 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 17:13:13.938193    5077 retry.go:31] will retry after 378.055853ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 17:13:14.080732    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:14.289943    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:14.290143    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:14.307390    5077 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1018 17:13:14.307487    5077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164474
	I1018 17:13:14.316805    5077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 17:13:14.328788    5077 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/addons-164474/id_rsa Username:docker}
	I1018 17:13:14.461115    5077 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1018 17:13:14.474501    5077 addons.go:238] Setting addon gcp-auth=true in "addons-164474"
	I1018 17:13:14.474544    5077 host.go:66] Checking if "addons-164474" exists ...
	I1018 17:13:14.474980    5077 cli_runner.go:164] Run: docker container inspect addons-164474 --format={{.State.Status}}
	I1018 17:13:14.496874    5077 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1018 17:13:14.496949    5077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164474
	I1018 17:13:14.520254    5077 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/addons-164474/id_rsa Username:docker}
	I1018 17:13:14.579382    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:14.791434    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:14.792086    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:15.079727    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:15.290105    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:15.290527    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1018 17:13:15.451484    5077 node_ready.go:57] node "addons-164474" has "Ready":"False" status (will retry)
	I1018 17:13:15.579187    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:15.793236    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:15.793689    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:15.969909    5077 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.914893506s)
	I1018 17:13:15.970005    5077 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.653168794s)
	I1018 17:13:15.970077    5077 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.473174477s)
	W1018 17:13:15.970252    5077 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 17:13:15.970283    5077 retry.go:31] will retry after 746.477138ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 17:13:15.973368    5077 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 17:13:15.976262    5077 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1018 17:13:15.979197    5077 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1018 17:13:15.979220    5077 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1018 17:13:16.003867    5077 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1018 17:13:16.003892    5077 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1018 17:13:16.021844    5077 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1018 17:13:16.021869    5077 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1018 17:13:16.036390    5077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1018 17:13:16.080807    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:16.291108    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:16.291278    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:16.524622    5077 addons.go:479] Verifying addon gcp-auth=true in "addons-164474"
	I1018 17:13:16.527651    5077 out.go:179] * Verifying gcp-auth addon...
	I1018 17:13:16.531235    5077 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1018 17:13:16.538792    5077 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1018 17:13:16.538820    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:16.639237    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:16.717488    5077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 17:13:16.790837    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:16.791120    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:17.034118    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:17.079898    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:17.290173    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:17.290361    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 17:13:17.451948    5077 node_ready.go:57] node "addons-164474" has "Ready":"False" status (will retry)
	I1018 17:13:17.534984    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 17:13:17.549488    5077 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 17:13:17.549516    5077 retry.go:31] will retry after 485.971313ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 17:13:17.579501    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:17.789437    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:17.791159    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:18.034919    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:18.036103    5077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 17:13:18.079581    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:18.295644    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:18.296153    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:18.534729    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:18.580134    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:18.789911    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:18.792707    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 17:13:18.822730    5077 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 17:13:18.822762    5077 retry.go:31] will retry after 894.899686ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 17:13:19.034981    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:19.079793    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:19.290752    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:19.290826    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:19.534543    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:19.579557    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:19.717862    5077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 17:13:19.794973    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:19.795539    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 17:13:19.952077    5077 node_ready.go:57] node "addons-164474" has "Ready":"False" status (will retry)
	I1018 17:13:20.035555    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:20.079589    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:20.289664    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:20.290979    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:20.537872    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 17:13:20.545955    5077 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 17:13:20.545985    5077 retry.go:31] will retry after 1.970203596s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 17:13:20.579713    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:20.789886    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:20.790631    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:21.034439    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:21.079120    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:21.289128    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:21.290169    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:21.535073    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:21.578949    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:21.790057    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:21.790354    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:22.034281    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:22.078997    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:22.289175    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:22.290572    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 17:13:22.451456    5077 node_ready.go:57] node "addons-164474" has "Ready":"False" status (will retry)
	I1018 17:13:22.516714    5077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 17:13:22.536073    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:22.578694    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:22.791156    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:22.792519    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:23.037026    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:23.079640    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:23.289456    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:23.290514    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 17:13:23.317414    5077 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 17:13:23.317444    5077 retry.go:31] will retry after 1.464282054s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 17:13:23.534977    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:23.579707    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:23.790081    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:23.790950    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:24.035432    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:24.079309    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:24.289354    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:24.290071    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 17:13:24.451586    5077 node_ready.go:57] node "addons-164474" has "Ready":"False" status (will retry)
	I1018 17:13:24.534568    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:24.579311    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:24.782765    5077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 17:13:24.794137    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:24.794444    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:25.034629    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:25.079891    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:25.290411    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:25.291867    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:25.557969    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:25.581599    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 17:13:25.639117    5077 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 17:13:25.639148    5077 retry.go:31] will retry after 4.590765672s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 17:13:25.788928    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:25.789678    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:26.034518    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:26.079377    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:26.290729    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:26.291120    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 17:13:26.452108    5077 node_ready.go:57] node "addons-164474" has "Ready":"False" status (will retry)
	I1018 17:13:26.534868    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:26.579807    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:26.789021    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:26.789926    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:27.035264    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:27.082034    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:27.289118    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:27.289906    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:27.534298    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:27.579053    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:27.790040    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:27.790357    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:28.034635    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:28.135119    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:28.290590    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:28.291045    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:28.534737    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:28.579550    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:28.789175    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:28.790821    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 17:13:28.951164    5077 node_ready.go:57] node "addons-164474" has "Ready":"False" status (will retry)
	I1018 17:13:29.034871    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:29.079640    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:29.289953    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:29.290091    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:29.535123    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:29.579978    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:29.789654    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:29.789774    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:30.035642    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:30.080127    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:30.230292    5077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 17:13:30.290740    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:30.290808    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:30.535056    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:30.580131    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:30.791595    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:30.792453    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 17:13:30.954319    5077 node_ready.go:57] node "addons-164474" has "Ready":"False" status (will retry)
	I1018 17:13:31.034752    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 17:13:31.040212    5077 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 17:13:31.040310    5077 retry.go:31] will retry after 7.624700004s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 17:13:31.079285    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:31.289433    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:31.290534    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:31.534626    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:31.579462    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:31.790566    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:31.790674    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:32.034612    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:32.082985    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:32.289661    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:32.289904    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:32.534242    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:32.579110    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:32.788828    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:32.790137    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:33.035048    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:33.079952    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:33.289014    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:33.290380    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 17:13:33.451341    5077 node_ready.go:57] node "addons-164474" has "Ready":"False" status (will retry)
	I1018 17:13:33.534341    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:33.579235    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:33.789742    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:33.789858    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:34.034972    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:34.079606    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:34.290441    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:34.290616    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:34.534314    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:34.579390    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:34.789335    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:34.789927    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:35.034733    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:35.079474    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:35.292466    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:35.292873    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 17:13:35.451624    5077 node_ready.go:57] node "addons-164474" has "Ready":"False" status (will retry)
	I1018 17:13:35.534282    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:35.579159    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:35.789461    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:35.790669    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:36.034798    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:36.079737    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:36.290196    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:36.290857    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:36.534296    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:36.579265    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:36.789533    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:36.790928    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:37.034561    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:37.079633    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:37.290191    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:37.290423    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 17:13:37.451761    5077 node_ready.go:57] node "addons-164474" has "Ready":"False" status (will retry)
	I1018 17:13:37.534566    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:37.579135    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:37.788693    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:37.789783    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:38.035049    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:38.080024    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:38.289525    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:38.289665    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:38.534964    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:38.579852    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:38.666073    5077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 17:13:38.790345    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:38.792397    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:39.035036    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:39.079701    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:39.291766    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:39.292250    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 17:13:39.452186    5077 node_ready.go:57] node "addons-164474" has "Ready":"False" status (will retry)
	W1018 17:13:39.467858    5077 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 17:13:39.467889    5077 retry.go:31] will retry after 13.863401369s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 17:13:39.534579    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:39.579509    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:39.790631    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:39.790835    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:40.035701    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:40.079862    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:40.290551    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:40.290903    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:40.534480    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:40.579484    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:40.790038    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:40.790351    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:41.035073    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:41.079549    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:41.290238    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:41.290435    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:41.533961    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:41.579678    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:41.789498    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:41.790729    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 17:13:41.951680    5077 node_ready.go:57] node "addons-164474" has "Ready":"False" status (will retry)
	I1018 17:13:42.035475    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:42.080477    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:42.290247    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:42.290398    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:42.534686    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:42.579491    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:42.789725    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:42.790737    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:43.034356    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:43.079349    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:43.289151    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:43.290190    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:43.533966    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:43.579662    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:43.790044    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:43.790245    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:44.035086    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:44.079986    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:44.288875    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:44.289950    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 17:13:44.451725    5077 node_ready.go:57] node "addons-164474" has "Ready":"False" status (will retry)
	I1018 17:13:44.534467    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:44.579569    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:44.790735    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:44.790987    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:45.042951    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:45.083517    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:45.290527    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:45.290810    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:45.534567    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:45.579650    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:45.789540    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:45.790714    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:46.034696    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:46.079791    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:46.289193    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:46.291163    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:46.534660    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:46.579923    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:46.789275    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:46.790434    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 17:13:46.951327    5077 node_ready.go:57] node "addons-164474" has "Ready":"False" status (will retry)
	I1018 17:13:47.035069    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:47.079980    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:47.289140    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:47.289621    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:47.534879    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:47.579835    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:47.810454    5077 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1018 17:13:47.810539    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:47.813645    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:47.973905    5077 node_ready.go:49] node "addons-164474" is "Ready"
	I1018 17:13:47.973937    5077 node_ready.go:38] duration metric: took 39.52576049s for node "addons-164474" to be "Ready" ...
	I1018 17:13:47.973950    5077 api_server.go:52] waiting for apiserver process to appear ...
	I1018 17:13:47.974008    5077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:13:47.992493    5077 api_server.go:72] duration metric: took 41.609038263s to wait for apiserver process to appear ...
	I1018 17:13:47.992514    5077 api_server.go:88] waiting for apiserver healthz status ...
	I1018 17:13:47.992533    5077 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 17:13:48.061263    5077 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1018 17:13:48.064520    5077 api_server.go:141] control plane version: v1.34.1
	I1018 17:13:48.064566    5077 api_server.go:131] duration metric: took 72.044897ms to wait for apiserver health ...
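	The healthz check recorded just above (a GET to https://192.168.49.2:8443/healthz returning 200 with the body "ok") is the readiness signal waited on before the pod checks that follow. A small, hypothetical Go sketch of that style of probe; the URL comes from the log, while the HTTP client setup, the skipped TLS verification, and the retry budget are assumptions rather than minikube's actual implementation:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// URL taken from the log; TLS verification is skipped only because a
		// self-signed apiserver certificate is assumed here — a real client
		// would trust the cluster CA instead.
		const healthz = "https://192.168.49.2:8443/healthz"
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}

		for attempt := 1; attempt <= 30; attempt++ {
			resp, err := client.Get(healthz)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
					return
				}
			}
			time.Sleep(2 * time.Second) // retry interval is an assumption
		}
		fmt.Println("apiserver never reported healthy")
	}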
	I1018 17:13:48.064575    5077 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 17:13:48.073352    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:48.105818    5077 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1018 17:13:48.105842    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:48.108261    5077 system_pods.go:59] 19 kube-system pods found
	I1018 17:13:48.108302    5077 system_pods.go:61] "coredns-66bc5c9577-467ch" [b89aeb20-752c-43e2-b8bb-580999350080] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 17:13:48.108312    5077 system_pods.go:61] "csi-hostpath-attacher-0" [a96e6e91-3fd7-4a35-96b6-dc9078bc0615] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 17:13:48.108318    5077 system_pods.go:61] "csi-hostpath-resizer-0" [2fbebb55-78e5-4594-9979-227aec6c93eb] Pending
	I1018 17:13:48.108323    5077 system_pods.go:61] "csi-hostpathplugin-9l87p" [43074c41-36f1-48ed-85da-9f4166509d86] Pending
	I1018 17:13:48.108327    5077 system_pods.go:61] "etcd-addons-164474" [0c1d9b95-efb3-41c6-ad13-ecdc5e2aed23] Running
	I1018 17:13:48.108331    5077 system_pods.go:61] "kindnet-hsvb9" [70417575-3af4-4899-aaf4-eb73d8dc18fc] Running
	I1018 17:13:48.108338    5077 system_pods.go:61] "kube-apiserver-addons-164474" [d7998556-38e2-44d9-b248-e8168e01f0b7] Running
	I1018 17:13:48.108343    5077 system_pods.go:61] "kube-controller-manager-addons-164474" [66eae987-1a94-433b-8387-4fe4b8f54f6d] Running
	I1018 17:13:48.108356    5077 system_pods.go:61] "kube-ingress-dns-minikube" [587c439a-5adf-4c7a-b2cf-37b34fbc7fe4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 17:13:48.108364    5077 system_pods.go:61] "kube-proxy-ccs4c" [07b2f86d-366e-47c9-8aad-6b7b51f33565] Running
	I1018 17:13:48.108370    5077 system_pods.go:61] "kube-scheduler-addons-164474" [e288ffed-0c2c-4993-8d34-daa6250e509d] Running
	I1018 17:13:48.108384    5077 system_pods.go:61] "metrics-server-85b7d694d7-8dnml" [7cf655d5-48cd-488d-9cbd-b19f09925a22] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 17:13:48.108389    5077 system_pods.go:61] "nvidia-device-plugin-daemonset-w6sqz" [ef275008-60c3-4bde-a747-35f70a06cb02] Pending
	I1018 17:13:48.108402    5077 system_pods.go:61] "registry-6b586f9694-fwkz8" [d12f3e97-a0a1-4ac6-aa88-1e38730ecf05] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 17:13:48.108408    5077 system_pods.go:61] "registry-creds-764b6fb674-k267j" [66a2f897-d4c3-4ebf-a15a-51183d31deaa] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 17:13:48.108418    5077 system_pods.go:61] "registry-proxy-6x6dm" [7cc511f1-c2a0-4516-85b6-eee6876bc7ae] Pending
	I1018 17:13:48.108423    5077 system_pods.go:61] "snapshot-controller-7d9fbc56b8-f8bm6" [96f5066a-4702-41dc-b553-575c361e1501] Pending
	I1018 17:13:48.108428    5077 system_pods.go:61] "snapshot-controller-7d9fbc56b8-gnvj9" [a5334265-1b53-49d6-95e8-60a89ea17d73] Pending
	I1018 17:13:48.108432    5077 system_pods.go:61] "storage-provisioner" [600b4ef5-41ba-4562-8384-bcfb6ce65634] Pending
	I1018 17:13:48.108437    5077 system_pods.go:74] duration metric: took 43.85736ms to wait for pod list to return data ...
	I1018 17:13:48.108448    5077 default_sa.go:34] waiting for default service account to be created ...
	I1018 17:13:48.117539    5077 default_sa.go:45] found service account: "default"
	I1018 17:13:48.117566    5077 default_sa.go:55] duration metric: took 9.112193ms for default service account to be created ...
	I1018 17:13:48.117575    5077 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 17:13:48.142095    5077 system_pods.go:86] 19 kube-system pods found
	I1018 17:13:48.142134    5077 system_pods.go:89] "coredns-66bc5c9577-467ch" [b89aeb20-752c-43e2-b8bb-580999350080] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 17:13:48.142144    5077 system_pods.go:89] "csi-hostpath-attacher-0" [a96e6e91-3fd7-4a35-96b6-dc9078bc0615] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 17:13:48.142152    5077 system_pods.go:89] "csi-hostpath-resizer-0" [2fbebb55-78e5-4594-9979-227aec6c93eb] Pending
	I1018 17:13:48.142157    5077 system_pods.go:89] "csi-hostpathplugin-9l87p" [43074c41-36f1-48ed-85da-9f4166509d86] Pending
	I1018 17:13:48.142161    5077 system_pods.go:89] "etcd-addons-164474" [0c1d9b95-efb3-41c6-ad13-ecdc5e2aed23] Running
	I1018 17:13:48.142165    5077 system_pods.go:89] "kindnet-hsvb9" [70417575-3af4-4899-aaf4-eb73d8dc18fc] Running
	I1018 17:13:48.142170    5077 system_pods.go:89] "kube-apiserver-addons-164474" [d7998556-38e2-44d9-b248-e8168e01f0b7] Running
	I1018 17:13:48.142174    5077 system_pods.go:89] "kube-controller-manager-addons-164474" [66eae987-1a94-433b-8387-4fe4b8f54f6d] Running
	I1018 17:13:48.142184    5077 system_pods.go:89] "kube-ingress-dns-minikube" [587c439a-5adf-4c7a-b2cf-37b34fbc7fe4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 17:13:48.142189    5077 system_pods.go:89] "kube-proxy-ccs4c" [07b2f86d-366e-47c9-8aad-6b7b51f33565] Running
	I1018 17:13:48.142196    5077 system_pods.go:89] "kube-scheduler-addons-164474" [e288ffed-0c2c-4993-8d34-daa6250e509d] Running
	I1018 17:13:48.142204    5077 system_pods.go:89] "metrics-server-85b7d694d7-8dnml" [7cf655d5-48cd-488d-9cbd-b19f09925a22] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 17:13:48.142213    5077 system_pods.go:89] "nvidia-device-plugin-daemonset-w6sqz" [ef275008-60c3-4bde-a747-35f70a06cb02] Pending
	I1018 17:13:48.142219    5077 system_pods.go:89] "registry-6b586f9694-fwkz8" [d12f3e97-a0a1-4ac6-aa88-1e38730ecf05] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 17:13:48.142225    5077 system_pods.go:89] "registry-creds-764b6fb674-k267j" [66a2f897-d4c3-4ebf-a15a-51183d31deaa] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 17:13:48.142233    5077 system_pods.go:89] "registry-proxy-6x6dm" [7cc511f1-c2a0-4516-85b6-eee6876bc7ae] Pending
	I1018 17:13:48.142237    5077 system_pods.go:89] "snapshot-controller-7d9fbc56b8-f8bm6" [96f5066a-4702-41dc-b553-575c361e1501] Pending
	I1018 17:13:48.142241    5077 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gnvj9" [a5334265-1b53-49d6-95e8-60a89ea17d73] Pending
	I1018 17:13:48.142252    5077 system_pods.go:89] "storage-provisioner" [600b4ef5-41ba-4562-8384-bcfb6ce65634] Pending
	I1018 17:13:48.142265    5077 retry.go:31] will retry after 201.706754ms: missing components: kube-dns
	I1018 17:13:48.300880    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:48.305091    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:48.391459    5077 system_pods.go:86] 19 kube-system pods found
	I1018 17:13:48.391491    5077 system_pods.go:89] "coredns-66bc5c9577-467ch" [b89aeb20-752c-43e2-b8bb-580999350080] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 17:13:48.391499    5077 system_pods.go:89] "csi-hostpath-attacher-0" [a96e6e91-3fd7-4a35-96b6-dc9078bc0615] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 17:13:48.391506    5077 system_pods.go:89] "csi-hostpath-resizer-0" [2fbebb55-78e5-4594-9979-227aec6c93eb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 17:13:48.391513    5077 system_pods.go:89] "csi-hostpathplugin-9l87p" [43074c41-36f1-48ed-85da-9f4166509d86] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 17:13:48.391518    5077 system_pods.go:89] "etcd-addons-164474" [0c1d9b95-efb3-41c6-ad13-ecdc5e2aed23] Running
	I1018 17:13:48.391528    5077 system_pods.go:89] "kindnet-hsvb9" [70417575-3af4-4899-aaf4-eb73d8dc18fc] Running
	I1018 17:13:48.391534    5077 system_pods.go:89] "kube-apiserver-addons-164474" [d7998556-38e2-44d9-b248-e8168e01f0b7] Running
	I1018 17:13:48.391547    5077 system_pods.go:89] "kube-controller-manager-addons-164474" [66eae987-1a94-433b-8387-4fe4b8f54f6d] Running
	I1018 17:13:48.391555    5077 system_pods.go:89] "kube-ingress-dns-minikube" [587c439a-5adf-4c7a-b2cf-37b34fbc7fe4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 17:13:48.391560    5077 system_pods.go:89] "kube-proxy-ccs4c" [07b2f86d-366e-47c9-8aad-6b7b51f33565] Running
	I1018 17:13:48.391564    5077 system_pods.go:89] "kube-scheduler-addons-164474" [e288ffed-0c2c-4993-8d34-daa6250e509d] Running
	I1018 17:13:48.391573    5077 system_pods.go:89] "metrics-server-85b7d694d7-8dnml" [7cf655d5-48cd-488d-9cbd-b19f09925a22] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 17:13:48.391580    5077 system_pods.go:89] "nvidia-device-plugin-daemonset-w6sqz" [ef275008-60c3-4bde-a747-35f70a06cb02] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 17:13:48.391588    5077 system_pods.go:89] "registry-6b586f9694-fwkz8" [d12f3e97-a0a1-4ac6-aa88-1e38730ecf05] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 17:13:48.391594    5077 system_pods.go:89] "registry-creds-764b6fb674-k267j" [66a2f897-d4c3-4ebf-a15a-51183d31deaa] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 17:13:48.391606    5077 system_pods.go:89] "registry-proxy-6x6dm" [7cc511f1-c2a0-4516-85b6-eee6876bc7ae] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 17:13:48.391614    5077 system_pods.go:89] "snapshot-controller-7d9fbc56b8-f8bm6" [96f5066a-4702-41dc-b553-575c361e1501] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 17:13:48.391619    5077 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gnvj9" [a5334265-1b53-49d6-95e8-60a89ea17d73] Pending
	I1018 17:13:48.391628    5077 system_pods.go:89] "storage-provisioner" [600b4ef5-41ba-4562-8384-bcfb6ce65634] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 17:13:48.391642    5077 retry.go:31] will retry after 266.916872ms: missing components: kube-dns
	I1018 17:13:48.566485    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:48.671711    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:48.681165    5077 system_pods.go:86] 19 kube-system pods found
	I1018 17:13:48.681200    5077 system_pods.go:89] "coredns-66bc5c9577-467ch" [b89aeb20-752c-43e2-b8bb-580999350080] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 17:13:48.681209    5077 system_pods.go:89] "csi-hostpath-attacher-0" [a96e6e91-3fd7-4a35-96b6-dc9078bc0615] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 17:13:48.681217    5077 system_pods.go:89] "csi-hostpath-resizer-0" [2fbebb55-78e5-4594-9979-227aec6c93eb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 17:13:48.681223    5077 system_pods.go:89] "csi-hostpathplugin-9l87p" [43074c41-36f1-48ed-85da-9f4166509d86] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 17:13:48.681228    5077 system_pods.go:89] "etcd-addons-164474" [0c1d9b95-efb3-41c6-ad13-ecdc5e2aed23] Running
	I1018 17:13:48.681233    5077 system_pods.go:89] "kindnet-hsvb9" [70417575-3af4-4899-aaf4-eb73d8dc18fc] Running
	I1018 17:13:48.681237    5077 system_pods.go:89] "kube-apiserver-addons-164474" [d7998556-38e2-44d9-b248-e8168e01f0b7] Running
	I1018 17:13:48.681243    5077 system_pods.go:89] "kube-controller-manager-addons-164474" [66eae987-1a94-433b-8387-4fe4b8f54f6d] Running
	I1018 17:13:48.681251    5077 system_pods.go:89] "kube-ingress-dns-minikube" [587c439a-5adf-4c7a-b2cf-37b34fbc7fe4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 17:13:48.681258    5077 system_pods.go:89] "kube-proxy-ccs4c" [07b2f86d-366e-47c9-8aad-6b7b51f33565] Running
	I1018 17:13:48.681263    5077 system_pods.go:89] "kube-scheduler-addons-164474" [e288ffed-0c2c-4993-8d34-daa6250e509d] Running
	I1018 17:13:48.681269    5077 system_pods.go:89] "metrics-server-85b7d694d7-8dnml" [7cf655d5-48cd-488d-9cbd-b19f09925a22] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 17:13:48.681288    5077 system_pods.go:89] "nvidia-device-plugin-daemonset-w6sqz" [ef275008-60c3-4bde-a747-35f70a06cb02] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 17:13:48.681301    5077 system_pods.go:89] "registry-6b586f9694-fwkz8" [d12f3e97-a0a1-4ac6-aa88-1e38730ecf05] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 17:13:48.681307    5077 system_pods.go:89] "registry-creds-764b6fb674-k267j" [66a2f897-d4c3-4ebf-a15a-51183d31deaa] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 17:13:48.681318    5077 system_pods.go:89] "registry-proxy-6x6dm" [7cc511f1-c2a0-4516-85b6-eee6876bc7ae] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 17:13:48.681326    5077 system_pods.go:89] "snapshot-controller-7d9fbc56b8-f8bm6" [96f5066a-4702-41dc-b553-575c361e1501] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 17:13:48.681333    5077 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gnvj9" [a5334265-1b53-49d6-95e8-60a89ea17d73] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 17:13:48.681341    5077 system_pods.go:89] "storage-provisioner" [600b4ef5-41ba-4562-8384-bcfb6ce65634] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 17:13:48.681355    5077 retry.go:31] will retry after 436.900491ms: missing components: kube-dns
	I1018 17:13:48.794455    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:48.794761    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:49.035150    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:49.080093    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:49.183226    5077 system_pods.go:86] 19 kube-system pods found
	I1018 17:13:49.183266    5077 system_pods.go:89] "coredns-66bc5c9577-467ch" [b89aeb20-752c-43e2-b8bb-580999350080] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 17:13:49.183275    5077 system_pods.go:89] "csi-hostpath-attacher-0" [a96e6e91-3fd7-4a35-96b6-dc9078bc0615] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 17:13:49.183283    5077 system_pods.go:89] "csi-hostpath-resizer-0" [2fbebb55-78e5-4594-9979-227aec6c93eb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 17:13:49.183291    5077 system_pods.go:89] "csi-hostpathplugin-9l87p" [43074c41-36f1-48ed-85da-9f4166509d86] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 17:13:49.183299    5077 system_pods.go:89] "etcd-addons-164474" [0c1d9b95-efb3-41c6-ad13-ecdc5e2aed23] Running
	I1018 17:13:49.183320    5077 system_pods.go:89] "kindnet-hsvb9" [70417575-3af4-4899-aaf4-eb73d8dc18fc] Running
	I1018 17:13:49.183329    5077 system_pods.go:89] "kube-apiserver-addons-164474" [d7998556-38e2-44d9-b248-e8168e01f0b7] Running
	I1018 17:13:49.183334    5077 system_pods.go:89] "kube-controller-manager-addons-164474" [66eae987-1a94-433b-8387-4fe4b8f54f6d] Running
	I1018 17:13:49.183341    5077 system_pods.go:89] "kube-ingress-dns-minikube" [587c439a-5adf-4c7a-b2cf-37b34fbc7fe4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 17:13:49.183349    5077 system_pods.go:89] "kube-proxy-ccs4c" [07b2f86d-366e-47c9-8aad-6b7b51f33565] Running
	I1018 17:13:49.183356    5077 system_pods.go:89] "kube-scheduler-addons-164474" [e288ffed-0c2c-4993-8d34-daa6250e509d] Running
	I1018 17:13:49.183362    5077 system_pods.go:89] "metrics-server-85b7d694d7-8dnml" [7cf655d5-48cd-488d-9cbd-b19f09925a22] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 17:13:49.183372    5077 system_pods.go:89] "nvidia-device-plugin-daemonset-w6sqz" [ef275008-60c3-4bde-a747-35f70a06cb02] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 17:13:49.183380    5077 system_pods.go:89] "registry-6b586f9694-fwkz8" [d12f3e97-a0a1-4ac6-aa88-1e38730ecf05] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 17:13:49.183386    5077 system_pods.go:89] "registry-creds-764b6fb674-k267j" [66a2f897-d4c3-4ebf-a15a-51183d31deaa] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 17:13:49.183396    5077 system_pods.go:89] "registry-proxy-6x6dm" [7cc511f1-c2a0-4516-85b6-eee6876bc7ae] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 17:13:49.183404    5077 system_pods.go:89] "snapshot-controller-7d9fbc56b8-f8bm6" [96f5066a-4702-41dc-b553-575c361e1501] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 17:13:49.183413    5077 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gnvj9" [a5334265-1b53-49d6-95e8-60a89ea17d73] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 17:13:49.183419    5077 system_pods.go:89] "storage-provisioner" [600b4ef5-41ba-4562-8384-bcfb6ce65634] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 17:13:49.183437    5077 retry.go:31] will retry after 559.053592ms: missing components: kube-dns
	I1018 17:13:49.290196    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:49.292148    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:49.551290    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:49.648052    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:49.749508    5077 system_pods.go:86] 19 kube-system pods found
	I1018 17:13:49.749591    5077 system_pods.go:89] "coredns-66bc5c9577-467ch" [b89aeb20-752c-43e2-b8bb-580999350080] Running
	I1018 17:13:49.749607    5077 system_pods.go:89] "csi-hostpath-attacher-0" [a96e6e91-3fd7-4a35-96b6-dc9078bc0615] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 17:13:49.749617    5077 system_pods.go:89] "csi-hostpath-resizer-0" [2fbebb55-78e5-4594-9979-227aec6c93eb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 17:13:49.749628    5077 system_pods.go:89] "csi-hostpathplugin-9l87p" [43074c41-36f1-48ed-85da-9f4166509d86] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 17:13:49.749633    5077 system_pods.go:89] "etcd-addons-164474" [0c1d9b95-efb3-41c6-ad13-ecdc5e2aed23] Running
	I1018 17:13:49.749638    5077 system_pods.go:89] "kindnet-hsvb9" [70417575-3af4-4899-aaf4-eb73d8dc18fc] Running
	I1018 17:13:49.749647    5077 system_pods.go:89] "kube-apiserver-addons-164474" [d7998556-38e2-44d9-b248-e8168e01f0b7] Running
	I1018 17:13:49.749652    5077 system_pods.go:89] "kube-controller-manager-addons-164474" [66eae987-1a94-433b-8387-4fe4b8f54f6d] Running
	I1018 17:13:49.749664    5077 system_pods.go:89] "kube-ingress-dns-minikube" [587c439a-5adf-4c7a-b2cf-37b34fbc7fe4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 17:13:49.749669    5077 system_pods.go:89] "kube-proxy-ccs4c" [07b2f86d-366e-47c9-8aad-6b7b51f33565] Running
	I1018 17:13:49.749674    5077 system_pods.go:89] "kube-scheduler-addons-164474" [e288ffed-0c2c-4993-8d34-daa6250e509d] Running
	I1018 17:13:49.749682    5077 system_pods.go:89] "metrics-server-85b7d694d7-8dnml" [7cf655d5-48cd-488d-9cbd-b19f09925a22] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 17:13:49.749695    5077 system_pods.go:89] "nvidia-device-plugin-daemonset-w6sqz" [ef275008-60c3-4bde-a747-35f70a06cb02] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 17:13:49.749705    5077 system_pods.go:89] "registry-6b586f9694-fwkz8" [d12f3e97-a0a1-4ac6-aa88-1e38730ecf05] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 17:13:49.749713    5077 system_pods.go:89] "registry-creds-764b6fb674-k267j" [66a2f897-d4c3-4ebf-a15a-51183d31deaa] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 17:13:49.749719    5077 system_pods.go:89] "registry-proxy-6x6dm" [7cc511f1-c2a0-4516-85b6-eee6876bc7ae] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 17:13:49.749729    5077 system_pods.go:89] "snapshot-controller-7d9fbc56b8-f8bm6" [96f5066a-4702-41dc-b553-575c361e1501] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 17:13:49.749738    5077 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gnvj9" [a5334265-1b53-49d6-95e8-60a89ea17d73] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 17:13:49.749742    5077 system_pods.go:89] "storage-provisioner" [600b4ef5-41ba-4562-8384-bcfb6ce65634] Running
	I1018 17:13:49.749757    5077 system_pods.go:126] duration metric: took 1.63217472s to wait for k8s-apps to be running ...
	I1018 17:13:49.749765    5077 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 17:13:49.749833    5077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 17:13:49.763695    5077 system_svc.go:56] duration metric: took 13.920925ms WaitForService to wait for kubelet
	I1018 17:13:49.763723    5077 kubeadm.go:586] duration metric: took 43.380273046s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 17:13:49.763743    5077 node_conditions.go:102] verifying NodePressure condition ...
	I1018 17:13:49.766864    5077 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 17:13:49.766898    5077 node_conditions.go:123] node cpu capacity is 2
	I1018 17:13:49.766912    5077 node_conditions.go:105] duration metric: took 3.163934ms to run NodePressure ...
	I1018 17:13:49.766925    5077 start.go:241] waiting for startup goroutines ...
	I1018 17:13:49.848482    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:49.848656    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:50.035523    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:50.080088    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:50.288929    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:50.290243    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:50.536098    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:50.637083    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:50.790782    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:50.791549    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:51.034914    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:51.080606    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:51.291138    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:51.292273    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:51.534695    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:51.580520    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:51.791852    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:51.792151    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:52.037534    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:52.080776    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:52.301228    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:52.304133    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:52.552765    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:52.606983    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:52.796353    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:52.796558    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:53.037809    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:53.083482    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:53.292273    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:53.292404    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:53.331839    5077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 17:13:53.534732    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:53.581227    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:53.794013    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:53.794332    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:54.038219    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:54.082525    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:54.292513    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:54.292897    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:54.535091    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:54.597005    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:54.791689    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:54.791821    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:54.931060    5077 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.59918827s)
	W1018 17:13:54.931092    5077 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 17:13:54.931110    5077 retry.go:31] will retry after 7.871158109s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
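Editor's note on the failure above: the kubectl output is verbatim and shows that the first document in /etc/kubernetes/addons/ig-crd.yaml is rejected because apiVersion and kind are not set, while the resources from ig-deployment.yaml still apply (namespace/gadget, daemonset.apps/gadget, etc.), so the addon falls back to the retry loop. As a minimal sketch only, the commands below illustrate how the same apply could be inspected or re-run by hand; the head check is a hypothetical debugging step not performed in this run, and --validate=false is the workaround named by the error text itself, not a fix for the manifest.

	# hypothetical check: inspect the manifest header that validation rejects
	sudo head -n 5 /etc/kubernetes/addons/ig-crd.yaml
	# same apply as logged above, with validation disabled as the error message suggests (workaround only)
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force --validate=false -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml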
	I1018 17:13:55.043010    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:55.080383    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:55.290166    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:55.290866    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:55.535259    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:55.636879    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:55.790734    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:55.790932    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:56.035017    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:56.080729    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:56.290610    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:56.291978    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:56.535146    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:56.579954    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:56.795252    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:56.795261    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:57.034911    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:57.080546    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:57.292044    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:57.292499    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:57.535099    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:57.581030    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:57.792225    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:57.792717    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:58.035716    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:58.080016    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:58.289954    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:58.290356    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:58.534032    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:58.582557    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:58.792328    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:58.792451    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:59.034416    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:59.079708    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:59.290418    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:59.290580    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:13:59.534833    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:13:59.580129    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:13:59.793303    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:13:59.793971    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:00.039345    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:00.092404    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:00.302971    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:00.303525    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:00.535647    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:00.581539    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:00.793662    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:00.794128    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:01.034990    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:01.080250    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:01.291592    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:01.291933    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:01.535263    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:01.579851    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:01.791302    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:01.792231    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:02.034377    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:02.079481    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:02.291480    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:02.291756    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:02.535325    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:02.580360    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:02.792118    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:02.792415    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:02.802690    5077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 17:14:03.034701    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:03.080431    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:03.291771    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:03.291944    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:03.535135    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:03.606539    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:03.791351    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:03.791608    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:03.869617    5077 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.066879323s)
	W1018 17:14:03.869657    5077 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 17:14:03.869704    5077 retry.go:31] will retry after 30.307679297s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 17:14:04.034262    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:04.080318    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:04.289665    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:04.290834    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:04.534064    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:04.579983    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:04.789497    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:04.790341    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:05.034512    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:05.079535    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:05.291183    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:05.292431    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:05.537310    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:05.580514    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:05.790359    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:05.790468    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:06.035296    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:06.080064    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:06.288785    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:06.290869    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:06.534735    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:06.579929    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:06.791220    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:06.792453    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:07.034756    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:07.080515    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:07.291262    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:07.291567    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:07.534768    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:07.580466    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:07.792012    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:07.792357    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:08.035334    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:08.080030    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:08.290040    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:08.291686    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:08.534713    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:08.580214    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:08.790736    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:08.791113    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:09.034748    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:09.080578    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:09.291857    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:09.293403    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:09.534587    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:09.579887    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:09.791961    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:09.792203    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:10.035315    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:10.082242    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:10.289954    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:10.290233    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:10.534727    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:10.582485    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:10.790125    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:10.791229    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:11.034551    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:11.080044    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:11.291275    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:11.291695    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:11.535011    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:11.579956    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:11.789481    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:11.790550    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:12.035011    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:12.080870    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:12.290428    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:12.290601    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:12.535053    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:12.579975    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:12.792581    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:12.793025    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:13.035696    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:13.080758    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:13.291547    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:13.291800    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:13.534636    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:13.579725    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:13.791065    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:13.790916    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:14.034391    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:14.080547    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:14.292494    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:14.293025    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:14.534350    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:14.580204    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:14.792058    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:14.792976    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:15.035559    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:15.081218    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:15.292290    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:15.292683    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:15.539811    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:15.582618    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:15.790386    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:15.790503    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:16.034145    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:16.078921    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:16.289172    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:16.289782    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:16.535236    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:16.579892    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:16.789239    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:16.790407    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:17.035041    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:17.135357    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:17.289518    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:17.290319    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:17.534527    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:17.579714    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:17.791391    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:17.791959    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:18.034175    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:18.079597    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:18.290924    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:18.292014    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:18.534574    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:18.579909    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:18.790921    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:18.791384    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:19.034940    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:19.080076    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:19.289973    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:19.290122    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:19.535169    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:19.579730    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:19.790908    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:19.791130    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:20.035331    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:20.080171    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:20.291025    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:20.291580    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:20.535133    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:20.587796    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:20.790859    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:20.790924    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:21.035299    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:21.079661    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:21.290275    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:21.290364    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:21.534619    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:21.580436    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:21.792212    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:21.792466    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:22.035081    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:22.080990    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:22.290688    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:22.291098    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:22.534601    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:22.580259    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:22.791566    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:22.792174    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:23.035624    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:23.080879    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:23.291429    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:23.291995    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:23.535530    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:23.580041    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:23.790963    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:23.791005    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:24.034785    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:24.080182    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:24.289549    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:24.291509    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:24.534429    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:24.579674    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:24.791698    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:24.792087    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:25.035966    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:25.081041    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:25.291816    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:25.292213    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:25.534644    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:25.581262    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:25.790848    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:25.791221    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:26.034038    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:26.080481    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:26.289955    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:26.290796    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:26.535339    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:26.580372    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:26.789705    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:26.790763    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:27.035180    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:27.079781    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:27.290675    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:27.291345    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:27.535540    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:27.581023    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:27.791315    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:27.792461    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:28.036074    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:28.080211    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:28.289359    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:28.291602    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:28.534688    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:28.579974    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:28.790452    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:28.790839    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:29.035156    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:29.080179    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:29.290352    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:29.290544    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:29.534020    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:29.580643    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:29.791651    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:29.792133    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:30.050732    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:30.080384    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:30.291060    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:30.291422    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:30.536509    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:30.579566    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:30.790541    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:30.792058    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:31.035412    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:31.137176    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:31.290182    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:31.290371    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:31.534553    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:31.580372    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:31.799864    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:31.800170    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:32.037155    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:32.080451    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:32.290096    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:32.291652    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:32.534805    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:32.580540    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:32.791640    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:32.791760    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:33.035124    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:33.079786    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:33.291255    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:33.291593    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:33.534804    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:33.580446    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:33.792277    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:33.792648    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:34.035187    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:34.079474    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:34.177858    5077 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 17:14:34.290135    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:34.291314    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:34.534574    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:34.580252    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:34.795847    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:34.796141    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:35.035000    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 17:14:35.066747    5077 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1018 17:14:35.066900    5077 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1018 17:14:35.081034    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:35.290468    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:35.290642    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:35.534831    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:35.579951    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:35.799589    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:35.799778    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:36.035725    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:36.080869    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:36.289234    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:36.290819    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:36.535089    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:36.580039    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:36.791434    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:36.791593    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:37.050147    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:37.079695    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:37.291068    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:37.291228    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:37.534790    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:37.581095    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:37.790879    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 17:14:37.791245    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:38.036155    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:38.079795    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:38.290430    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:38.290571    5077 kapi.go:107] duration metric: took 1m25.504550646s to wait for kubernetes.io/minikube-addons=registry ...
	I1018 17:14:38.535093    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:38.579428    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:38.790733    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:39.036204    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:39.141502    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:39.290511    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:39.534842    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:39.580461    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:39.791575    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:40.040132    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:40.079438    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:40.290916    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:40.535294    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:40.579620    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:40.790675    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:41.035065    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:41.136808    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:41.291232    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:41.535163    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:41.580405    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:41.791439    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:42.034863    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:42.081265    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:42.292897    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:42.535124    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:42.592629    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:42.791320    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:43.034861    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:43.080184    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:43.290101    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:43.535367    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:43.579233    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:43.790827    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:44.035083    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:44.080852    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:44.291276    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:44.534767    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:44.580080    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:44.790842    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:45.043186    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:45.082189    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:45.292803    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:45.535099    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 17:14:45.579193    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:45.790849    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:46.035240    5077 kapi.go:107] duration metric: took 1m29.504005172s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1018 17:14:46.038418    5077 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-164474 cluster.
	I1018 17:14:46.041324    5077 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1018 17:14:46.044129    5077 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1018 17:14:46.079519    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:46.291312    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:46.580204    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:46.790563    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:47.080927    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:47.291758    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:47.579886    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:47.791394    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:48.079825    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:48.290420    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:48.580149    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:48.790514    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:49.080479    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:49.291227    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:49.580238    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:49.790329    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:50.082058    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:50.290595    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:50.580091    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:50.790570    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:51.080345    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:51.290742    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:51.580714    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:51.791206    5077 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 17:14:52.079983    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:52.290032    5077 kapi.go:107] duration metric: took 1m39.50294224s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1018 17:14:52.579634    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:53.131207    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:53.580073    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:54.079750    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:54.579926    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:55.087833    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:55.579664    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:56.080005    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:56.579992    5077 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 17:14:57.083663    5077 kapi.go:107] duration metric: took 1m44.007564352s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1018 17:14:57.086614    5077 out.go:179] * Enabled addons: cloud-spanner, storage-provisioner, registry-creds, nvidia-device-plugin, amd-gpu-device-plugin, default-storageclass, storage-provisioner-rancher, metrics-server, ingress-dns, yakd, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1018 17:14:57.089327    5077 addons.go:514] duration metric: took 1m50.705604027s for enable addons: enabled=[cloud-spanner storage-provisioner registry-creds nvidia-device-plugin amd-gpu-device-plugin default-storageclass storage-provisioner-rancher metrics-server ingress-dns yakd volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1018 17:14:57.089393    5077 start.go:246] waiting for cluster config update ...
	I1018 17:14:57.089418    5077 start.go:255] writing updated cluster config ...
	I1018 17:14:57.090664    5077 ssh_runner.go:195] Run: rm -f paused
	I1018 17:14:57.095142    5077 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 17:14:57.098531    5077 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-467ch" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:14:57.106219    5077 pod_ready.go:94] pod "coredns-66bc5c9577-467ch" is "Ready"
	I1018 17:14:57.106250    5077 pod_ready.go:86] duration metric: took 7.699234ms for pod "coredns-66bc5c9577-467ch" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:14:57.109072    5077 pod_ready.go:83] waiting for pod "etcd-addons-164474" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:14:57.114011    5077 pod_ready.go:94] pod "etcd-addons-164474" is "Ready"
	I1018 17:14:57.114033    5077 pod_ready.go:86] duration metric: took 4.934896ms for pod "etcd-addons-164474" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:14:57.116459    5077 pod_ready.go:83] waiting for pod "kube-apiserver-addons-164474" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:14:57.121646    5077 pod_ready.go:94] pod "kube-apiserver-addons-164474" is "Ready"
	I1018 17:14:57.121718    5077 pod_ready.go:86] duration metric: took 5.239516ms for pod "kube-apiserver-addons-164474" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:14:57.124376    5077 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-164474" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:14:57.499640    5077 pod_ready.go:94] pod "kube-controller-manager-addons-164474" is "Ready"
	I1018 17:14:57.499668    5077 pod_ready.go:86] duration metric: took 375.270436ms for pod "kube-controller-manager-addons-164474" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:14:57.698756    5077 pod_ready.go:83] waiting for pod "kube-proxy-ccs4c" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:14:58.099038    5077 pod_ready.go:94] pod "kube-proxy-ccs4c" is "Ready"
	I1018 17:14:58.099119    5077 pod_ready.go:86] duration metric: took 400.334533ms for pod "kube-proxy-ccs4c" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:14:58.299424    5077 pod_ready.go:83] waiting for pod "kube-scheduler-addons-164474" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:14:58.699345    5077 pod_ready.go:94] pod "kube-scheduler-addons-164474" is "Ready"
	I1018 17:14:58.699375    5077 pod_ready.go:86] duration metric: took 399.921506ms for pod "kube-scheduler-addons-164474" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:14:58.699388    5077 pod_ready.go:40] duration metric: took 1.604217812s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 17:14:59.099655    5077 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 17:14:59.102994    5077 out.go:179] * Done! kubectl is now configured to use "addons-164474" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 18 17:15:00 addons-164474 crio[831]: time="2025-10-18T17:15:00.622831529Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:1281c408b40f382847f6b49db2b696d162aaf1c836a24120ff17bca35ffec910 UID:a212ae5b-eb4f-4f94-a0e8-d10307a75f8b NetNS:/var/run/netns/8896bfde-c7e7-475e-a379-07632226f8e8 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400049ed28}] Aliases:map[]}"
	Oct 18 17:15:00 addons-164474 crio[831]: time="2025-10-18T17:15:00.623188531Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 18 17:15:00 addons-164474 crio[831]: time="2025-10-18T17:15:00.627823903Z" level=info msg="Ran pod sandbox 1281c408b40f382847f6b49db2b696d162aaf1c836a24120ff17bca35ffec910 with infra container: default/busybox/POD" id=2f8eea3e-0e39-41bb-93fb-f884f9408688 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 17:15:00 addons-164474 crio[831]: time="2025-10-18T17:15:00.629851689Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4e5334b7-9701-47e6-9af8-9d4ece79a754 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 17:15:00 addons-164474 crio[831]: time="2025-10-18T17:15:00.630203284Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=4e5334b7-9701-47e6-9af8-9d4ece79a754 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 17:15:00 addons-164474 crio[831]: time="2025-10-18T17:15:00.630343126Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=4e5334b7-9701-47e6-9af8-9d4ece79a754 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 17:15:00 addons-164474 crio[831]: time="2025-10-18T17:15:00.633731046Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=5122ac12-a91d-4c10-bcd8-c3e34c71d5d9 name=/runtime.v1.ImageService/PullImage
	Oct 18 17:15:00 addons-164474 crio[831]: time="2025-10-18T17:15:00.640192002Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 18 17:15:01 addons-164474 crio[831]: time="2025-10-18T17:15:01.246432833Z" level=info msg="Removing container: 998e9336ca2259de54c8d65925937629f786cff34787fb44f4f11aaf16d2e104" id=0bc38b1e-0c33-490e-a7f7-fd9516416365 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 17:15:01 addons-164474 crio[831]: time="2025-10-18T17:15:01.248929582Z" level=info msg="Error loading conmon cgroup of container 998e9336ca2259de54c8d65925937629f786cff34787fb44f4f11aaf16d2e104: cgroup deleted" id=0bc38b1e-0c33-490e-a7f7-fd9516416365 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 17:15:01 addons-164474 crio[831]: time="2025-10-18T17:15:01.256880356Z" level=info msg="Removed container 998e9336ca2259de54c8d65925937629f786cff34787fb44f4f11aaf16d2e104: gcp-auth/gcp-auth-certs-create-6dvfl/create" id=0bc38b1e-0c33-490e-a7f7-fd9516416365 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 17:15:01 addons-164474 crio[831]: time="2025-10-18T17:15:01.260679293Z" level=info msg="Stopping pod sandbox: fb13693cc908dd98aa8ed0d67e9ff8e6d3ecf5aeae96df041d3e08734df49201" id=bbf271fd-1b16-470c-b2c7-30d082f1202d name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 18 17:15:01 addons-164474 crio[831]: time="2025-10-18T17:15:01.260739856Z" level=info msg="Stopped pod sandbox (already stopped): fb13693cc908dd98aa8ed0d67e9ff8e6d3ecf5aeae96df041d3e08734df49201" id=bbf271fd-1b16-470c-b2c7-30d082f1202d name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 18 17:15:01 addons-164474 crio[831]: time="2025-10-18T17:15:01.261245824Z" level=info msg="Removing pod sandbox: fb13693cc908dd98aa8ed0d67e9ff8e6d3ecf5aeae96df041d3e08734df49201" id=c97da849-69c5-4f7b-8e7a-04348ac14713 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 18 17:15:01 addons-164474 crio[831]: time="2025-10-18T17:15:01.269225751Z" level=info msg="Removed pod sandbox: fb13693cc908dd98aa8ed0d67e9ff8e6d3ecf5aeae96df041d3e08734df49201" id=c97da849-69c5-4f7b-8e7a-04348ac14713 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 18 17:15:02 addons-164474 crio[831]: time="2025-10-18T17:15:02.724239094Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=5122ac12-a91d-4c10-bcd8-c3e34c71d5d9 name=/runtime.v1.ImageService/PullImage
	Oct 18 17:15:02 addons-164474 crio[831]: time="2025-10-18T17:15:02.724892658Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a0301164-846b-4085-a0fc-9b4b06cdcf1c name=/runtime.v1.ImageService/ImageStatus
	Oct 18 17:15:02 addons-164474 crio[831]: time="2025-10-18T17:15:02.728089388Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=712b9c15-0579-4d44-b5e6-78ec664fa30f name=/runtime.v1.ImageService/ImageStatus
	Oct 18 17:15:02 addons-164474 crio[831]: time="2025-10-18T17:15:02.734300306Z" level=info msg="Creating container: default/busybox/busybox" id=0429b820-4df4-4d8f-a9d5-3ac9546ab5ce name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 17:15:02 addons-164474 crio[831]: time="2025-10-18T17:15:02.735071033Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 17:15:02 addons-164474 crio[831]: time="2025-10-18T17:15:02.741324766Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 17:15:02 addons-164474 crio[831]: time="2025-10-18T17:15:02.741817843Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 17:15:02 addons-164474 crio[831]: time="2025-10-18T17:15:02.758110478Z" level=info msg="Created container 6cd2dd6d87a855c2b8b480c2d5373a279c8aa6fa6dcc49c0cf8d2f7e6bdc73c2: default/busybox/busybox" id=0429b820-4df4-4d8f-a9d5-3ac9546ab5ce name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 17:15:02 addons-164474 crio[831]: time="2025-10-18T17:15:02.759374553Z" level=info msg="Starting container: 6cd2dd6d87a855c2b8b480c2d5373a279c8aa6fa6dcc49c0cf8d2f7e6bdc73c2" id=809e0d76-6b34-4650-ae59-9666fc420462 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 17:15:02 addons-164474 crio[831]: time="2025-10-18T17:15:02.763182295Z" level=info msg="Started container" PID=5016 containerID=6cd2dd6d87a855c2b8b480c2d5373a279c8aa6fa6dcc49c0cf8d2f7e6bdc73c2 description=default/busybox/busybox id=809e0d76-6b34-4650-ae59-9666fc420462 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1281c408b40f382847f6b49db2b696d162aaf1c836a24120ff17bca35ffec910
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	6cd2dd6d87a85       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          8 seconds ago        Running             busybox                                  0                   1281c408b40f3       busybox                                     default
	968c95a146a7f       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          13 seconds ago       Running             csi-snapshotter                          0                   e97e4d507df8e       csi-hostpathplugin-9l87p                    kube-system
	7657f768a8a9a       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          15 seconds ago       Running             csi-provisioner                          0                   e97e4d507df8e       csi-hostpathplugin-9l87p                    kube-system
	a4777ba56bbe1       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            17 seconds ago       Running             liveness-probe                           0                   e97e4d507df8e       csi-hostpathplugin-9l87p                    kube-system
	cdf72845ca4f0       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           17 seconds ago       Running             hostpath                                 0                   e97e4d507df8e       csi-hostpathplugin-9l87p                    kube-system
	48dfa16c4c6d7       registry.k8s.io/ingress-nginx/controller@sha256:4ae52268a9493fc308d5f2fb67fe657d2499293aa644122d385ddb60c2330dbc                             19 seconds ago       Running             controller                               0                   5f13b9e551841       ingress-nginx-controller-675c5ddd98-9vsqk   ingress-nginx
	465d642f21c7a       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 25 seconds ago       Running             gcp-auth                                 0                   1528d396f0f9f       gcp-auth-78565c9fb4-4hmpz                   gcp-auth
	cbf41849e12c0       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                28 seconds ago       Running             node-driver-registrar                    0                   e97e4d507df8e       csi-hostpathplugin-9l87p                    kube-system
	5a594f8d1f286       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:f279436ecca5b88c20fd93c0d2a668ace136058ecad987e96e26014585e335b4                            29 seconds ago       Running             gadget                                   0                   5bd19d7b95a8b       gadget-sh2jw                                gadget
	cd1c762de0b5d       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              33 seconds ago       Running             registry-proxy                           0                   4a23982b51523       registry-proxy-6x6dm                        kube-system
	f97b941babec4       nvcr.io/nvidia/k8s-device-plugin@sha256:206d989142113ab71eaf27958a0e0a203f40103cf5b48890f5de80fd1b3fcfde                                     37 seconds ago       Running             nvidia-device-plugin-ctr                 0                   f54e1b3b0913d       nvidia-device-plugin-daemonset-w6sqz        kube-system
	608532cbaeefc       9a80c0c8eb61cb88536fa58caaf18357fffd3e9fd0481b2781dfc6359f7654c9                                                                             37 seconds ago       Exited              patch                                    3                   ddca86d7dcb0e       gcp-auth-certs-patch-6vkm4                  gcp-auth
	c763b99ed4a70       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           50 seconds ago       Running             registry                                 0                   dcf177613e7c0       registry-6b586f9694-fwkz8                   kube-system
	26297b4bb5620       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   52 seconds ago       Running             csi-external-health-monitor-controller   0                   e97e4d507df8e       csi-hostpathplugin-9l87p                    kube-system
	ae65eedb55756       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   53 seconds ago       Exited              patch                                    0                   4fa21d6c5ff65       ingress-nginx-admission-patch-xdpbv         ingress-nginx
	14f2f76f82dc9       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      53 seconds ago       Running             volume-snapshot-controller               0                   cc41371971419       snapshot-controller-7d9fbc56b8-f8bm6        kube-system
	676ff2293e1f8       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             54 seconds ago       Running             local-path-provisioner                   0                   e148bce858711       local-path-provisioner-648f6765c9-dssgc     local-path-storage
	901f9bb2898fa       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             55 seconds ago       Running             csi-attacher                             0                   a809314725ed4       csi-hostpath-attacher-0                     kube-system
	8a59f8ac6ef28       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               56 seconds ago       Running             minikube-ingress-dns                     0                   5832b0282c647       kube-ingress-dns-minikube                   kube-system
	ac84eee03f897       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   About a minute ago   Exited              create                                   0                   95ab4ffe1ec2f       ingress-nginx-admission-create-9qw6v        ingress-nginx
	ce684ca523f08       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   f9e45235974e7       snapshot-controller-7d9fbc56b8-gnvj9        kube-system
	137ab15901ce8       gcr.io/cloud-spanner-emulator/emulator@sha256:c2688dc4b7ecb4546084321d63c2b3b616a54263488137e18fcb7c7005aef086                               About a minute ago   Running             cloud-spanner-emulator                   0                   0bfeed615e7eb       cloud-spanner-emulator-86bd5cbb97-gwtv9     default
	f4d16c0746e45       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              About a minute ago   Running             yakd                                     0                   842c5d10e4082       yakd-dashboard-5ff678cb9-v54wx              yakd-dashboard
	f402fe3063f55       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              About a minute ago   Running             csi-resizer                              0                   0e93a9af74be2       csi-hostpath-resizer-0                      kube-system
	6865806b912ab       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        About a minute ago   Running             metrics-server                           0                   7e18f7c741efd       metrics-server-85b7d694d7-8dnml             kube-system
	8d07fa8a1c45f       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             About a minute ago   Running             storage-provisioner                      0                   246d92ae28bfb       storage-provisioner                         kube-system
	ece6fd8e36b74       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             About a minute ago   Running             coredns                                  0                   f94b0bcd8d10c       coredns-66bc5c9577-467ch                    kube-system
	d12b84a601116       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             2 minutes ago        Running             kindnet-cni                              0                   58badd650ed5d       kindnet-hsvb9                               kube-system
	d87115dc1b972       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             2 minutes ago        Running             kube-proxy                               0                   9feea4b7319dd       kube-proxy-ccs4c                            kube-system
	07f016f168b62       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             2 minutes ago        Running             kube-apiserver                           0                   3383374da1a74       kube-apiserver-addons-164474                kube-system
	f085bccd65219       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             2 minutes ago        Running             etcd                                     0                   aa4342b95837d       etcd-addons-164474                          kube-system
	4a1b92f8cd14a       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             2 minutes ago        Running             kube-controller-manager                  0                   1c2b2d461293d       kube-controller-manager-addons-164474       kube-system
	246aa3ddddf57       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             2 minutes ago        Running             kube-scheduler                           0                   50bde38df0541       kube-scheduler-addons-164474                kube-system
	
	
	==> coredns [ece6fd8e36b7414b9ea8a96fa9d85543498f89e17705fa3bc262b1570f482b24] <==
	[INFO] 10.244.0.18:51212 - 41877 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000073396s
	[INFO] 10.244.0.18:51212 - 43255 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002025358s
	[INFO] 10.244.0.18:51212 - 58804 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002463706s
	[INFO] 10.244.0.18:51212 - 59680 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000124703s
	[INFO] 10.244.0.18:51212 - 51654 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000081109s
	[INFO] 10.244.0.18:42452 - 16273 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00013834s
	[INFO] 10.244.0.18:42452 - 16519 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00018703s
	[INFO] 10.244.0.18:59308 - 9218 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000113462s
	[INFO] 10.244.0.18:59308 - 9046 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000164622s
	[INFO] 10.244.0.18:39552 - 61561 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000111345s
	[INFO] 10.244.0.18:39552 - 61356 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000085211s
	[INFO] 10.244.0.18:36246 - 55836 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001374994s
	[INFO] 10.244.0.18:36246 - 56264 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001622546s
	[INFO] 10.244.0.18:50331 - 61718 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000105921s
	[INFO] 10.244.0.18:50331 - 61603 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000149425s
	[INFO] 10.244.0.20:32993 - 701 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000145536s
	[INFO] 10.244.0.20:47126 - 30960 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000106546s
	[INFO] 10.244.0.20:53787 - 54326 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000112781s
	[INFO] 10.244.0.20:58302 - 44288 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000099801s
	[INFO] 10.244.0.20:53817 - 43948 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000117073s
	[INFO] 10.244.0.20:60921 - 10992 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000106184s
	[INFO] 10.244.0.20:49984 - 51656 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002101864s
	[INFO] 10.244.0.20:43300 - 52521 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001596659s
	[INFO] 10.244.0.20:49777 - 36046 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002583167s
	[INFO] 10.244.0.20:33970 - 60645 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.002864615s
	
	
	==> describe nodes <==
	Name:               addons-164474
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-164474
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=addons-164474
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T17_13_02_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-164474
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-164474"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 17:12:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-164474
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 17:15:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 17:15:04 +0000   Sat, 18 Oct 2025 17:12:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 17:15:04 +0000   Sat, 18 Oct 2025 17:12:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 17:15:04 +0000   Sat, 18 Oct 2025 17:12:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 17:15:04 +0000   Sat, 18 Oct 2025 17:13:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-164474
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                bcbaa3b1-55d6-41b1-a200-9f6a4cc99665
	  Boot ID:                    8a1d6305-2994-4aa4-a4cc-6b62966b9918
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (26 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     cloud-spanner-emulator-86bd5cbb97-gwtv9      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  gadget                      gadget-sh2jw                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  gcp-auth                    gcp-auth-78565c9fb4-4hmpz                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-9vsqk    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         119s
	  kube-system                 coredns-66bc5c9577-467ch                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m5s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 csi-hostpathplugin-9l87p                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 etcd-addons-164474                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m10s
	  kube-system                 kindnet-hsvb9                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m5s
	  kube-system                 kube-apiserver-addons-164474                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                 kube-controller-manager-addons-164474        200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-proxy-ccs4c                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 kube-scheduler-addons-164474                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 metrics-server-85b7d694d7-8dnml              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         2m1s
	  kube-system                 nvidia-device-plugin-daemonset-w6sqz         0 (0%)        0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 registry-6b586f9694-fwkz8                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 registry-creds-764b6fb674-k267j              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 registry-proxy-6x6dm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 snapshot-controller-7d9fbc56b8-f8bm6         0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 snapshot-controller-7d9fbc56b8-gnvj9         0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	  local-path-storage          local-path-provisioner-648f6765c9-dssgc      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	  yakd-dashboard              yakd-dashboard-5ff678cb9-v54wx               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     2m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m3s                   kube-proxy       
	  Normal   NodeHasSufficientMemory  2m16s (x8 over 2m17s)  kubelet          Node addons-164474 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m16s (x8 over 2m17s)  kubelet          Node addons-164474 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m16s (x8 over 2m17s)  kubelet          Node addons-164474 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m10s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m10s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m10s                  kubelet          Node addons-164474 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m10s                  kubelet          Node addons-164474 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m10s                  kubelet          Node addons-164474 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m6s                   node-controller  Node addons-164474 event: Registered Node addons-164474 in Controller
	  Normal   NodeReady                84s                    kubelet          Node addons-164474 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct18 16:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014995] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.499206] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.035776] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.808632] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.418900] kauditd_printk_skb: 36 callbacks suppressed
	[Oct18 17:12] overlayfs: idmapped layers are currently not supported
	[  +0.082393] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [f085bccd65219cd8bb8d59ffcc8bee71589bead44d17e3e6fe5269fe6781f2f3] <==
	{"level":"warn","ts":"2025-10-18T17:12:57.118566Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T17:12:57.132140Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T17:12:57.145853Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T17:12:57.176193Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T17:12:57.186037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T17:12:57.203465Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T17:12:57.226225Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T17:12:57.242513Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T17:12:57.253116Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T17:12:57.278512Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T17:12:57.290001Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T17:12:57.306618Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T17:12:57.328529Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T17:12:57.343996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T17:12:57.361991Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T17:12:57.385863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T17:12:57.429145Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T17:12:57.447507Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T17:12:57.498551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T17:13:13.554196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T17:13:13.587244Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T17:13:35.295316Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T17:13:35.329517Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T17:13:35.345028Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T17:13:35.359900Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41326","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [465d642f21c7ad346b0ec9f2b4225bbe6eb67e9bbd2b751784c0bde16c473589] <==
	2025/10/18 17:14:45 GCP Auth Webhook started!
	2025/10/18 17:14:59 Ready to marshal response ...
	2025/10/18 17:14:59 Ready to write response ...
	2025/10/18 17:15:00 Ready to marshal response ...
	2025/10/18 17:15:00 Ready to write response ...
	2025/10/18 17:15:00 Ready to marshal response ...
	2025/10/18 17:15:00 Ready to write response ...
	
	
	==> kernel <==
	 17:15:11 up 57 min,  0 user,  load average: 1.81, 1.04, 0.43
	Linux addons-164474 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d12b84a60111629c5268442b96bd59e440c9aec3f86f326d9528b07daa476596] <==
	E1018 17:13:37.206493       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1018 17:13:37.207493       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1018 17:13:37.207503       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1018 17:13:38.807126       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 17:13:38.807176       1 metrics.go:72] Registering metrics
	I1018 17:13:38.807231       1 controller.go:711] "Syncing nftables rules"
	E1018 17:13:38.807601       1 controller.go:417] "reading nfqueue stats" err="open /proc/net/netfilter/nfnetlink_queue: no such file or directory"
	I1018 17:13:47.208915       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 17:13:47.209002       1 main.go:301] handling current node
	I1018 17:13:57.205614       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 17:13:57.205683       1 main.go:301] handling current node
	I1018 17:14:07.205766       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 17:14:07.205840       1 main.go:301] handling current node
	I1018 17:14:17.206623       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 17:14:17.206678       1 main.go:301] handling current node
	I1018 17:14:27.206617       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 17:14:27.206652       1 main.go:301] handling current node
	I1018 17:14:37.205941       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 17:14:37.205978       1 main.go:301] handling current node
	I1018 17:14:47.206319       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 17:14:47.206353       1 main.go:301] handling current node
	I1018 17:14:57.205853       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 17:14:57.205886       1 main.go:301] handling current node
	I1018 17:15:07.206403       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 17:15:07.206441       1 main.go:301] handling current node
	
	
	==> kube-apiserver [07f016f168b62771c5ab60ab8215041fcead58a20ef1da5932bcb8d6da58077f] <==
	E1018 17:13:54.622842       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.50.17:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.50.17:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.50.17:443: connect: connection refused" logger="UnhandledError"
	W1018 17:13:54.622900       1 handler_proxy.go:99] no RequestInfo found in the context
	E1018 17:13:54.622953       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1018 17:13:55.625052       1 handler_proxy.go:99] no RequestInfo found in the context
	E1018 17:13:55.625098       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1018 17:13:55.625111       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1018 17:13:55.625196       1 handler_proxy.go:99] no RequestInfo found in the context
	E1018 17:13:55.625266       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1018 17:13:55.626326       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1018 17:13:59.630026       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.50.17:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.50.17:443/apis/metrics.k8s.io/v1beta1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	W1018 17:13:59.630363       1 handler_proxy.go:99] no RequestInfo found in the context
	E1018 17:13:59.630399       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1018 17:13:59.687979       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1018 17:13:59.711343       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E1018 17:15:08.749229       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:54228: use of closed network connection
	E1018 17:15:09.003407       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:54258: use of closed network connection
	
	
	==> kube-controller-manager [4a1b92f8cd14a17c1e2790e1ca03a5608e43fb0ee84dba04aae2757215b8f043] <==
	I1018 17:13:05.308359       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-164474" podCIDRs=["10.244.0.0/24"]
	I1018 17:13:05.309657       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1018 17:13:05.309749       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1018 17:13:05.314159       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 17:13:05.314257       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1018 17:13:05.315436       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1018 17:13:05.315852       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1018 17:13:05.315919       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1018 17:13:05.315929       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1018 17:13:05.316088       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 17:13:05.316128       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 17:13:05.321023       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 17:13:05.321140       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1018 17:13:05.321494       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1018 17:13:05.321523       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1018 17:13:05.321560       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	E1018 17:13:10.833728       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1018 17:13:35.285763       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1018 17:13:35.285923       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1018 17:13:35.285966       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1018 17:13:35.317550       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1018 17:13:35.326637       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1018 17:13:35.386997       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 17:13:35.427080       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 17:13:50.318836       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [d87115dc1b972147e18ebd00d21f7d791e5831c69fbef5f5e25fb2fade668bf7] <==
	I1018 17:13:07.237691       1 server_linux.go:53] "Using iptables proxy"
	I1018 17:13:07.354749       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 17:13:07.455784       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 17:13:07.455823       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1018 17:13:07.455908       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 17:13:07.499064       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 17:13:07.499142       1 server_linux.go:132] "Using iptables Proxier"
	I1018 17:13:07.510044       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 17:13:07.524225       1 server.go:527] "Version info" version="v1.34.1"
	I1018 17:13:07.524258       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 17:13:07.536647       1 config.go:200] "Starting service config controller"
	I1018 17:13:07.536667       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 17:13:07.536693       1 config.go:106] "Starting endpoint slice config controller"
	I1018 17:13:07.536697       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 17:13:07.536705       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 17:13:07.536708       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 17:13:07.539862       1 config.go:309] "Starting node config controller"
	I1018 17:13:07.539879       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 17:13:07.539886       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 17:13:07.637516       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 17:13:07.637547       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 17:13:07.637560       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [246aa3ddddf57033502d5fd5679ade1ae4e79cefdfdc7645841ea4f17a3e0313] <==
	E1018 17:12:58.370100       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 17:12:58.370164       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 17:12:58.370217       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 17:12:58.370233       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 17:12:58.370299       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 17:12:58.370308       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 17:12:58.370373       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 17:12:58.370406       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 17:12:58.370485       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 17:12:58.370544       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 17:12:58.371096       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 17:12:58.371191       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 17:12:58.371296       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 17:12:58.372034       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 17:12:59.165179       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 17:12:59.237421       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 17:12:59.281284       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 17:12:59.297283       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 17:12:59.453088       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1018 17:12:59.499485       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 17:12:59.524414       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 17:12:59.536626       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 17:12:59.540310       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 17:12:59.552642       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1018 17:13:01.921868       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 17:14:34 addons-164474 kubelet[1270]: I1018 17:14:34.802203    1270 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/nvidia-device-plugin-daemonset-w6sqz" podStartSLOduration=3.604665935 podStartE2EDuration="47.802184937s" podCreationTimestamp="2025-10-18 17:13:47 +0000 UTC" firstStartedPulling="2025-10-18 17:13:49.615570542 +0000 UTC m=+48.554586668" lastFinishedPulling="2025-10-18 17:14:33.813089536 +0000 UTC m=+92.752105670" observedRunningTime="2025-10-18 17:14:34.80171979 +0000 UTC m=+93.740735956" watchObservedRunningTime="2025-10-18 17:14:34.802184937 +0000 UTC m=+93.741201071"
	Oct 18 17:14:35 addons-164474 kubelet[1270]: I1018 17:14:35.801476    1270 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-w6sqz" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 17:14:36 addons-164474 kubelet[1270]: I1018 17:14:36.448266    1270 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8lhb9\" (UniqueName: \"kubernetes.io/projected/4d0b0d2a-fd8d-4a37-a78e-33561a28d8c6-kube-api-access-8lhb9\") pod \"4d0b0d2a-fd8d-4a37-a78e-33561a28d8c6\" (UID: \"4d0b0d2a-fd8d-4a37-a78e-33561a28d8c6\") "
	Oct 18 17:14:36 addons-164474 kubelet[1270]: I1018 17:14:36.451002    1270 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d0b0d2a-fd8d-4a37-a78e-33561a28d8c6-kube-api-access-8lhb9" (OuterVolumeSpecName: "kube-api-access-8lhb9") pod "4d0b0d2a-fd8d-4a37-a78e-33561a28d8c6" (UID: "4d0b0d2a-fd8d-4a37-a78e-33561a28d8c6"). InnerVolumeSpecName "kube-api-access-8lhb9". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 18 17:14:36 addons-164474 kubelet[1270]: I1018 17:14:36.548798    1270 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8lhb9\" (UniqueName: \"kubernetes.io/projected/4d0b0d2a-fd8d-4a37-a78e-33561a28d8c6-kube-api-access-8lhb9\") on node \"addons-164474\" DevicePath \"\""
	Oct 18 17:14:36 addons-164474 kubelet[1270]: I1018 17:14:36.805475    1270 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ddca86d7dcb0e146c945538e81e5944ee186693edfe668791b061af44cebacda"
	Oct 18 17:14:37 addons-164474 kubelet[1270]: I1018 17:14:37.186336    1270 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7830ed8-e65c-4a4c-ab71-c088d2bf4426" path="/var/lib/kubelet/pods/a7830ed8-e65c-4a4c-ab71-c088d2bf4426/volumes"
	Oct 18 17:14:37 addons-164474 kubelet[1270]: I1018 17:14:37.817146    1270 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-6x6dm" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 17:14:38 addons-164474 kubelet[1270]: I1018 17:14:38.819164    1270 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-6x6dm" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 17:14:41 addons-164474 kubelet[1270]: I1018 17:14:41.863797    1270 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-proxy-6x6dm" podStartSLOduration=7.227348619 podStartE2EDuration="54.863756646s" podCreationTimestamp="2025-10-18 17:13:47 +0000 UTC" firstStartedPulling="2025-10-18 17:13:49.710801347 +0000 UTC m=+48.649817472" lastFinishedPulling="2025-10-18 17:14:37.347209365 +0000 UTC m=+96.286225499" observedRunningTime="2025-10-18 17:14:37.842530972 +0000 UTC m=+96.781547098" watchObservedRunningTime="2025-10-18 17:14:41.863756646 +0000 UTC m=+100.802772781"
	Oct 18 17:14:45 addons-164474 kubelet[1270]: I1018 17:14:45.900539    1270 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-sh2jw" podStartSLOduration=69.588221975 podStartE2EDuration="1m33.900517538s" podCreationTimestamp="2025-10-18 17:13:12 +0000 UTC" firstStartedPulling="2025-10-18 17:14:16.632448656 +0000 UTC m=+75.571464781" lastFinishedPulling="2025-10-18 17:14:40.944744185 +0000 UTC m=+99.883760344" observedRunningTime="2025-10-18 17:14:41.863333912 +0000 UTC m=+100.802350063" watchObservedRunningTime="2025-10-18 17:14:45.900517538 +0000 UTC m=+104.839533672"
	Oct 18 17:14:46 addons-164474 kubelet[1270]: I1018 17:14:46.624335    1270 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-4hmpz" podStartSLOduration=65.743977133 podStartE2EDuration="1m30.624317952s" podCreationTimestamp="2025-10-18 17:13:16 +0000 UTC" firstStartedPulling="2025-10-18 17:14:20.503333947 +0000 UTC m=+79.442350072" lastFinishedPulling="2025-10-18 17:14:45.383674765 +0000 UTC m=+104.322690891" observedRunningTime="2025-10-18 17:14:45.902643861 +0000 UTC m=+104.841660020" watchObservedRunningTime="2025-10-18 17:14:46.624317952 +0000 UTC m=+105.563334078"
	Oct 18 17:14:51 addons-164474 kubelet[1270]: E1018 17:14:51.697693    1270 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Oct 18 17:14:51 addons-164474 kubelet[1270]: E1018 17:14:51.697786    1270 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/66a2f897-d4c3-4ebf-a15a-51183d31deaa-gcr-creds podName:66a2f897-d4c3-4ebf-a15a-51183d31deaa nodeName:}" failed. No retries permitted until 2025-10-18 17:15:55.697767686 +0000 UTC m=+174.636783820 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/66a2f897-d4c3-4ebf-a15a-51183d31deaa-gcr-creds") pod "registry-creds-764b6fb674-k267j" (UID: "66a2f897-d4c3-4ebf-a15a-51183d31deaa") : secret "registry-creds-gcr" not found
	Oct 18 17:14:53 addons-164474 kubelet[1270]: I1018 17:14:53.424206    1270 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Oct 18 17:14:53 addons-164474 kubelet[1270]: I1018 17:14:53.424850    1270 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Oct 18 17:14:56 addons-164474 kubelet[1270]: I1018 17:14:56.967539    1270 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-675c5ddd98-9vsqk" podStartSLOduration=74.060280957 podStartE2EDuration="1m44.967513124s" podCreationTimestamp="2025-10-18 17:13:12 +0000 UTC" firstStartedPulling="2025-10-18 17:14:20.576558324 +0000 UTC m=+79.515574450" lastFinishedPulling="2025-10-18 17:14:51.483790491 +0000 UTC m=+110.422806617" observedRunningTime="2025-10-18 17:14:51.937307635 +0000 UTC m=+110.876323777" watchObservedRunningTime="2025-10-18 17:14:56.967513124 +0000 UTC m=+115.906529250"
	Oct 18 17:15:00 addons-164474 kubelet[1270]: I1018 17:15:00.266037    1270 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-9l87p" podStartSLOduration=5.155656519 podStartE2EDuration="1m13.266013511s" podCreationTimestamp="2025-10-18 17:13:47 +0000 UTC" firstStartedPulling="2025-10-18 17:13:48.749399593 +0000 UTC m=+47.688415719" lastFinishedPulling="2025-10-18 17:14:56.859756577 +0000 UTC m=+115.798772711" observedRunningTime="2025-10-18 17:14:56.969577121 +0000 UTC m=+115.908593255" watchObservedRunningTime="2025-10-18 17:15:00.266013511 +0000 UTC m=+119.205029637"
	Oct 18 17:15:00 addons-164474 kubelet[1270]: I1018 17:15:00.406763    1270 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/a212ae5b-eb4f-4f94-a0e8-d10307a75f8b-gcp-creds\") pod \"busybox\" (UID: \"a212ae5b-eb4f-4f94-a0e8-d10307a75f8b\") " pod="default/busybox"
	Oct 18 17:15:00 addons-164474 kubelet[1270]: I1018 17:15:00.424180    1270 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpfph\" (UniqueName: \"kubernetes.io/projected/a212ae5b-eb4f-4f94-a0e8-d10307a75f8b-kube-api-access-zpfph\") pod \"busybox\" (UID: \"a212ae5b-eb4f-4f94-a0e8-d10307a75f8b\") " pod="default/busybox"
	Oct 18 17:15:00 addons-164474 kubelet[1270]: W1018 17:15:00.626574    1270 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/31000ccc16f2da54474476b9a5eeb51132587beec766c8579e875c01b1c476ea/crio-1281c408b40f382847f6b49db2b696d162aaf1c836a24120ff17bca35ffec910 WatchSource:0}: Error finding container 1281c408b40f382847f6b49db2b696d162aaf1c836a24120ff17bca35ffec910: Status 404 returned error can't find the container with id 1281c408b40f382847f6b49db2b696d162aaf1c836a24120ff17bca35ffec910
	Oct 18 17:15:01 addons-164474 kubelet[1270]: I1018 17:15:01.243650    1270 scope.go:117] "RemoveContainer" containerID="998e9336ca2259de54c8d65925937629f786cff34787fb44f4f11aaf16d2e104"
	Oct 18 17:15:01 addons-164474 kubelet[1270]: E1018 17:15:01.368020    1270 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/cc55efa650dfc419b810e34fd077fad8843e5914523ebe1bf6d6c0c1f61b7f47/diff" to get inode usage: stat /var/lib/containers/storage/overlay/cc55efa650dfc419b810e34fd077fad8843e5914523ebe1bf6d6c0c1f61b7f47/diff: no such file or directory, extraDiskErr: <nil>
	Oct 18 17:15:02 addons-164474 kubelet[1270]: I1018 17:15:02.980158    1270 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=0.886961092 podStartE2EDuration="2.980137518s" podCreationTimestamp="2025-10-18 17:15:00 +0000 UTC" firstStartedPulling="2025-10-18 17:15:00.632796731 +0000 UTC m=+119.571812857" lastFinishedPulling="2025-10-18 17:15:02.725973166 +0000 UTC m=+121.664989283" observedRunningTime="2025-10-18 17:15:02.979360342 +0000 UTC m=+121.918376476" watchObservedRunningTime="2025-10-18 17:15:02.980137518 +0000 UTC m=+121.919153652"
	Oct 18 17:15:07 addons-164474 kubelet[1270]: I1018 17:15:07.186753    1270 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d0b0d2a-fd8d-4a37-a78e-33561a28d8c6" path="/var/lib/kubelet/pods/4d0b0d2a-fd8d-4a37-a78e-33561a28d8c6/volumes"
	
	
	==> storage-provisioner [8d07fa8a1c45fb2b7f3f20b332023c1b057391cd1e4435eb47db001464e9ada7] <==
	W1018 17:14:47.110253       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:14:49.114069       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:14:49.122776       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:14:51.125864       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:14:51.131468       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:14:53.134873       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:14:53.139721       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:14:55.143320       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:14:55.150434       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:14:57.153048       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:14:57.157625       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:14:59.164544       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:14:59.176871       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:15:01.179672       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:15:01.185675       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:15:03.189014       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:15:03.193282       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:15:05.196580       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:15:05.203570       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:15:07.208098       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:15:07.213324       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:15:09.217450       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:15:09.223201       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:15:11.226778       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:15:11.232135       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-164474 -n addons-164474
helpers_test.go:269: (dbg) Run:  kubectl --context addons-164474 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-9qw6v ingress-nginx-admission-patch-xdpbv registry-creds-764b6fb674-k267j
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-164474 describe pod ingress-nginx-admission-create-9qw6v ingress-nginx-admission-patch-xdpbv registry-creds-764b6fb674-k267j
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-164474 describe pod ingress-nginx-admission-create-9qw6v ingress-nginx-admission-patch-xdpbv registry-creds-764b6fb674-k267j: exit status 1 (90.616139ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-9qw6v" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-xdpbv" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-k267j" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-164474 describe pod ingress-nginx-admission-create-9qw6v ingress-nginx-admission-patch-xdpbv registry-creds-764b6fb674-k267j: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-164474 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-164474 addons disable headlamp --alsologtostderr -v=1: exit status 11 (266.016909ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 17:15:12.370106   11723 out.go:360] Setting OutFile to fd 1 ...
	I1018 17:15:12.370386   11723 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 17:15:12.370399   11723 out.go:374] Setting ErrFile to fd 2...
	I1018 17:15:12.370404   11723 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 17:15:12.370732   11723 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-2509/.minikube/bin
	I1018 17:15:12.371092   11723 mustload.go:65] Loading cluster: addons-164474
	I1018 17:15:12.371575   11723 config.go:182] Loaded profile config "addons-164474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:15:12.371594   11723 addons.go:606] checking whether the cluster is paused
	I1018 17:15:12.371739   11723 config.go:182] Loaded profile config "addons-164474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:15:12.371775   11723 host.go:66] Checking if "addons-164474" exists ...
	I1018 17:15:12.372317   11723 cli_runner.go:164] Run: docker container inspect addons-164474 --format={{.State.Status}}
	I1018 17:15:12.390019   11723 ssh_runner.go:195] Run: systemctl --version
	I1018 17:15:12.390081   11723 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164474
	I1018 17:15:12.412085   11723 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/addons-164474/id_rsa Username:docker}
	I1018 17:15:12.519588   11723 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 17:15:12.519677   11723 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 17:15:12.548857   11723 cri.go:89] found id: "968c95a146a7fe08d5189eee29bd9582f2894b5d0f04e78e794058e86e194f17"
	I1018 17:15:12.548876   11723 cri.go:89] found id: "7657f768a8a9ac41bcd1f5e7a196579a7dcf31f08b605bba0bd11acb46369892"
	I1018 17:15:12.548881   11723 cri.go:89] found id: "a4777ba56bbe130ce2d0759f981f7a5a7a81a6f76b26c9602759d75786f28075"
	I1018 17:15:12.548885   11723 cri.go:89] found id: "cdf72845ca4f04b7f38a96e8e2bc2c5bff55db097097fe86438572754061e4d1"
	I1018 17:15:12.548888   11723 cri.go:89] found id: "cbf41849e12c028d15eee86acc3c0fcaf5d31af35d656b7935de4a45730fb182"
	I1018 17:15:12.548891   11723 cri.go:89] found id: "cd1c762de0b5dd26a00d004eb60c3a0356920d2d898bf210120e83239de379d3"
	I1018 17:15:12.548895   11723 cri.go:89] found id: "f97b941babec4dfdf104ffdbe7459e396a64a17a6edfa11989d9170c5b5365e2"
	I1018 17:15:12.548898   11723 cri.go:89] found id: "c763b99ed4a70e785446e888023cdfabc0fdeb6e7dcb1a84844d98d22b841291"
	I1018 17:15:12.548901   11723 cri.go:89] found id: "26297b4bb562054967554961013b4aecf4a819a64b9615266425ddb33797d349"
	I1018 17:15:12.548908   11723 cri.go:89] found id: "14f2f76f82dc964b8b157e088100913e80feaa2be642ecc8b72fea78bd2a0ed1"
	I1018 17:15:12.548911   11723 cri.go:89] found id: "901f9bb2898fac636a6903ad516f9b140591198721e4e2bfd30c9ab9155a01ed"
	I1018 17:15:12.548914   11723 cri.go:89] found id: "8a59f8ac6ef2822e7088c9cd1a68272c147739f96eaf27abf4a85d43c140b0ea"
	I1018 17:15:12.548917   11723 cri.go:89] found id: "ce684ca523f08f4af3d1134e239085b099cc9e2cd0f8679963ba4f111fcf7567"
	I1018 17:15:12.548920   11723 cri.go:89] found id: "f402fe3063f55e7003a2aaac453c55c6b2139f8fa75d1a062b447a4a5a8f278c"
	I1018 17:15:12.548924   11723 cri.go:89] found id: "6865806b912ab6d902d766fb60959288c01cc7c01f0f6d41ece13a1484e43f45"
	I1018 17:15:12.548964   11723 cri.go:89] found id: "8d07fa8a1c45fb2b7f3f20b332023c1b057391cd1e4435eb47db001464e9ada7"
	I1018 17:15:12.548969   11723 cri.go:89] found id: "ece6fd8e36b7414b9ea8a96fa9d85543498f89e17705fa3bc262b1570f482b24"
	I1018 17:15:12.548974   11723 cri.go:89] found id: "d12b84a60111629c5268442b96bd59e440c9aec3f86f326d9528b07daa476596"
	I1018 17:15:12.548977   11723 cri.go:89] found id: "d87115dc1b972147e18ebd00d21f7d791e5831c69fbef5f5e25fb2fade668bf7"
	I1018 17:15:12.548981   11723 cri.go:89] found id: "07f016f168b62771c5ab60ab8215041fcead58a20ef1da5932bcb8d6da58077f"
	I1018 17:15:12.548985   11723 cri.go:89] found id: "f085bccd65219cd8bb8d59ffcc8bee71589bead44d17e3e6fe5269fe6781f2f3"
	I1018 17:15:12.548989   11723 cri.go:89] found id: "4a1b92f8cd14a17c1e2790e1ca03a5608e43fb0ee84dba04aae2757215b8f043"
	I1018 17:15:12.548992   11723 cri.go:89] found id: "246aa3ddddf57033502d5fd5679ade1ae4e79cefdfdc7645841ea4f17a3e0313"
	I1018 17:15:12.548995   11723 cri.go:89] found id: ""
	I1018 17:15:12.549044   11723 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 17:15:12.564433   11723 out.go:203] 
	W1018 17:15:12.567466   11723 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T17:15:12Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T17:15:12Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 17:15:12.567501   11723 out.go:285] * 
	* 
	W1018 17:15:12.571799   11723 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 17:15:12.574786   11723 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-164474 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (3.16s)
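Note on the exit status 11 above: before disabling an addon, minikube first checks whether the cluster is paused by listing kube-system containers with crictl and then running "sudo runc list -f json" over SSH (both commands appear verbatim in the stderr log). On this crio node the runc call fails with "open /run/runc: no such file or directory", so the check aborts with MK_ADDON_DISABLE_PAUSED and the addons disable command exits 11. A minimal sketch for reproducing the check by hand, assuming the addons-164474 profile is still running; the final ls is only an illustrative way to confirm the missing runc state directory and is not part of minikube's own check:

	# same container listing the paused-check performs (IDs should match the cri.go lines above)
	minikube -p addons-164474 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# the step that fails in this report: runc cannot open its state directory
	minikube -p addons-164474 ssh -- sudo runc list -f json
	# confirm /run/runc is absent on the node (expected on this crio setup)
	minikube -p addons-164474 ssh -- ls /run/runc

The CloudSpanner failure below, and the other addons disable failures in this report, show the same stderr pattern.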

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.3s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-gwtv9" [ba6125b7-8dcf-4967-958b-4139c89d6e3e] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003610482s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-164474 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-164474 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (293.78586ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 17:15:29.823106   12195 out.go:360] Setting OutFile to fd 1 ...
	I1018 17:15:29.823361   12195 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 17:15:29.823396   12195 out.go:374] Setting ErrFile to fd 2...
	I1018 17:15:29.823418   12195 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 17:15:29.823706   12195 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-2509/.minikube/bin
	I1018 17:15:29.824010   12195 mustload.go:65] Loading cluster: addons-164474
	I1018 17:15:29.824418   12195 config.go:182] Loaded profile config "addons-164474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:15:29.824463   12195 addons.go:606] checking whether the cluster is paused
	I1018 17:15:29.824593   12195 config.go:182] Loaded profile config "addons-164474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:15:29.824635   12195 host.go:66] Checking if "addons-164474" exists ...
	I1018 17:15:29.825275   12195 cli_runner.go:164] Run: docker container inspect addons-164474 --format={{.State.Status}}
	I1018 17:15:29.842965   12195 ssh_runner.go:195] Run: systemctl --version
	I1018 17:15:29.843018   12195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164474
	I1018 17:15:29.861175   12195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/addons-164474/id_rsa Username:docker}
	I1018 17:15:29.967876   12195 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 17:15:29.967994   12195 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 17:15:30.029349   12195 cri.go:89] found id: "968c95a146a7fe08d5189eee29bd9582f2894b5d0f04e78e794058e86e194f17"
	I1018 17:15:30.029373   12195 cri.go:89] found id: "7657f768a8a9ac41bcd1f5e7a196579a7dcf31f08b605bba0bd11acb46369892"
	I1018 17:15:30.029378   12195 cri.go:89] found id: "a4777ba56bbe130ce2d0759f981f7a5a7a81a6f76b26c9602759d75786f28075"
	I1018 17:15:30.029382   12195 cri.go:89] found id: "cdf72845ca4f04b7f38a96e8e2bc2c5bff55db097097fe86438572754061e4d1"
	I1018 17:15:30.029385   12195 cri.go:89] found id: "cbf41849e12c028d15eee86acc3c0fcaf5d31af35d656b7935de4a45730fb182"
	I1018 17:15:30.029389   12195 cri.go:89] found id: "cd1c762de0b5dd26a00d004eb60c3a0356920d2d898bf210120e83239de379d3"
	I1018 17:15:30.029392   12195 cri.go:89] found id: "f97b941babec4dfdf104ffdbe7459e396a64a17a6edfa11989d9170c5b5365e2"
	I1018 17:15:30.029396   12195 cri.go:89] found id: "c763b99ed4a70e785446e888023cdfabc0fdeb6e7dcb1a84844d98d22b841291"
	I1018 17:15:30.029399   12195 cri.go:89] found id: "26297b4bb562054967554961013b4aecf4a819a64b9615266425ddb33797d349"
	I1018 17:15:30.029408   12195 cri.go:89] found id: "14f2f76f82dc964b8b157e088100913e80feaa2be642ecc8b72fea78bd2a0ed1"
	I1018 17:15:30.029412   12195 cri.go:89] found id: "901f9bb2898fac636a6903ad516f9b140591198721e4e2bfd30c9ab9155a01ed"
	I1018 17:15:30.029415   12195 cri.go:89] found id: "8a59f8ac6ef2822e7088c9cd1a68272c147739f96eaf27abf4a85d43c140b0ea"
	I1018 17:15:30.029418   12195 cri.go:89] found id: "ce684ca523f08f4af3d1134e239085b099cc9e2cd0f8679963ba4f111fcf7567"
	I1018 17:15:30.029421   12195 cri.go:89] found id: "f402fe3063f55e7003a2aaac453c55c6b2139f8fa75d1a062b447a4a5a8f278c"
	I1018 17:15:30.029424   12195 cri.go:89] found id: "6865806b912ab6d902d766fb60959288c01cc7c01f0f6d41ece13a1484e43f45"
	I1018 17:15:30.029430   12195 cri.go:89] found id: "8d07fa8a1c45fb2b7f3f20b332023c1b057391cd1e4435eb47db001464e9ada7"
	I1018 17:15:30.029434   12195 cri.go:89] found id: "ece6fd8e36b7414b9ea8a96fa9d85543498f89e17705fa3bc262b1570f482b24"
	I1018 17:15:30.029439   12195 cri.go:89] found id: "d12b84a60111629c5268442b96bd59e440c9aec3f86f326d9528b07daa476596"
	I1018 17:15:30.029442   12195 cri.go:89] found id: "d87115dc1b972147e18ebd00d21f7d791e5831c69fbef5f5e25fb2fade668bf7"
	I1018 17:15:30.029445   12195 cri.go:89] found id: "07f016f168b62771c5ab60ab8215041fcead58a20ef1da5932bcb8d6da58077f"
	I1018 17:15:30.029452   12195 cri.go:89] found id: "f085bccd65219cd8bb8d59ffcc8bee71589bead44d17e3e6fe5269fe6781f2f3"
	I1018 17:15:30.029455   12195 cri.go:89] found id: "4a1b92f8cd14a17c1e2790e1ca03a5608e43fb0ee84dba04aae2757215b8f043"
	I1018 17:15:30.029458   12195 cri.go:89] found id: "246aa3ddddf57033502d5fd5679ade1ae4e79cefdfdc7645841ea4f17a3e0313"
	I1018 17:15:30.029461   12195 cri.go:89] found id: ""
	I1018 17:15:30.029522   12195 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 17:15:30.056529   12195 out.go:203] 
	W1018 17:15:30.059700   12195 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T17:15:30Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T17:15:30Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 17:15:30.059735   12195 out.go:285] * 
	* 
	W1018 17:15:30.064122   12195 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 17:15:30.067262   12195 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-164474 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.30s)
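Note: the MK_ADDON_DISABLE_PAUSED failures in this run (CloudSpanner above, and LocalPath, NvidiaDevicePlugin, and Yakd below) all follow the same pattern visible in the stderr: `minikube addons disable` first checks whether the cluster is paused, and that check shells out to `sudo runc list -f json`, which exits with status 1 because /run/runc does not exist on this CRI-O node. The sketch below is not minikube source; it is a minimal Go reproduction of that check, with an assumed fallback (treating a missing runc state directory as "no paused containers") that would let the disable step proceed instead of aborting with exit status 11.

	// Hypothetical sketch (not minikube code): reproduce the paused-state check
	// that each failing `addons disable` run above performs.
	package main

	import (
		"errors"
		"fmt"
		"os"
		"os/exec"
	)

	func listPausedContainers() ([]byte, error) {
		// minikube shells out to `sudo runc list -f json` after enumerating
		// kube-system containers with crictl; on this node the command fails
		// because the runc state directory /run/runc is absent.
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err == nil {
			return out, nil
		}
		// Assumed mitigation for illustration: a missing state directory simply
		// means runc has no containers, paused or otherwise.
		if _, statErr := os.Stat("/run/runc"); errors.Is(statErr, os.ErrNotExist) {
			return []byte("[]"), nil
		}
		return nil, fmt.Errorf("sudo runc list -f json: %w", err)
	}

	func main() {
		out, err := listPausedContainers()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println(string(out))
	}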

                                                
                                    
x
+
TestAddons/parallel/LocalPath (8.57s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-164474 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-164474 apply -f testdata/storage-provisioner-rancher/pod.yaml
2025/10/18 17:15:24 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-164474 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-164474 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-164474 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-164474 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-164474 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [f43068b7-4983-426a-9eff-5612461d44e5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [f43068b7-4983-426a-9eff-5612461d44e5] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [f43068b7-4983-426a-9eff-5612461d44e5] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003315272s
addons_test.go:967: (dbg) Run:  kubectl --context addons-164474 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-164474 ssh "cat /opt/local-path-provisioner/pvc-ae21b147-3096-4e56-ade2-459d3d01d96a_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-164474 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-164474 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-164474 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-164474 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (270.251056ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 17:15:32.464329   12346 out.go:360] Setting OutFile to fd 1 ...
	I1018 17:15:32.464586   12346 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 17:15:32.464617   12346 out.go:374] Setting ErrFile to fd 2...
	I1018 17:15:32.464637   12346 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 17:15:32.465106   12346 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-2509/.minikube/bin
	I1018 17:15:32.465605   12346 mustload.go:65] Loading cluster: addons-164474
	I1018 17:15:32.466298   12346 config.go:182] Loaded profile config "addons-164474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:15:32.466338   12346 addons.go:606] checking whether the cluster is paused
	I1018 17:15:32.466508   12346 config.go:182] Loaded profile config "addons-164474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:15:32.466548   12346 host.go:66] Checking if "addons-164474" exists ...
	I1018 17:15:32.467671   12346 cli_runner.go:164] Run: docker container inspect addons-164474 --format={{.State.Status}}
	I1018 17:15:32.487153   12346 ssh_runner.go:195] Run: systemctl --version
	I1018 17:15:32.487206   12346 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164474
	I1018 17:15:32.505453   12346 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/addons-164474/id_rsa Username:docker}
	I1018 17:15:32.615607   12346 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 17:15:32.615704   12346 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 17:15:32.646125   12346 cri.go:89] found id: "968c95a146a7fe08d5189eee29bd9582f2894b5d0f04e78e794058e86e194f17"
	I1018 17:15:32.646194   12346 cri.go:89] found id: "7657f768a8a9ac41bcd1f5e7a196579a7dcf31f08b605bba0bd11acb46369892"
	I1018 17:15:32.646212   12346 cri.go:89] found id: "a4777ba56bbe130ce2d0759f981f7a5a7a81a6f76b26c9602759d75786f28075"
	I1018 17:15:32.646231   12346 cri.go:89] found id: "cdf72845ca4f04b7f38a96e8e2bc2c5bff55db097097fe86438572754061e4d1"
	I1018 17:15:32.646248   12346 cri.go:89] found id: "cbf41849e12c028d15eee86acc3c0fcaf5d31af35d656b7935de4a45730fb182"
	I1018 17:15:32.646289   12346 cri.go:89] found id: "cd1c762de0b5dd26a00d004eb60c3a0356920d2d898bf210120e83239de379d3"
	I1018 17:15:32.646306   12346 cri.go:89] found id: "f97b941babec4dfdf104ffdbe7459e396a64a17a6edfa11989d9170c5b5365e2"
	I1018 17:15:32.646325   12346 cri.go:89] found id: "c763b99ed4a70e785446e888023cdfabc0fdeb6e7dcb1a84844d98d22b841291"
	I1018 17:15:32.646356   12346 cri.go:89] found id: "26297b4bb562054967554961013b4aecf4a819a64b9615266425ddb33797d349"
	I1018 17:15:32.646383   12346 cri.go:89] found id: "14f2f76f82dc964b8b157e088100913e80feaa2be642ecc8b72fea78bd2a0ed1"
	I1018 17:15:32.646401   12346 cri.go:89] found id: "901f9bb2898fac636a6903ad516f9b140591198721e4e2bfd30c9ab9155a01ed"
	I1018 17:15:32.646420   12346 cri.go:89] found id: "8a59f8ac6ef2822e7088c9cd1a68272c147739f96eaf27abf4a85d43c140b0ea"
	I1018 17:15:32.646455   12346 cri.go:89] found id: "ce684ca523f08f4af3d1134e239085b099cc9e2cd0f8679963ba4f111fcf7567"
	I1018 17:15:32.646473   12346 cri.go:89] found id: "f402fe3063f55e7003a2aaac453c55c6b2139f8fa75d1a062b447a4a5a8f278c"
	I1018 17:15:32.646490   12346 cri.go:89] found id: "6865806b912ab6d902d766fb60959288c01cc7c01f0f6d41ece13a1484e43f45"
	I1018 17:15:32.646522   12346 cri.go:89] found id: "8d07fa8a1c45fb2b7f3f20b332023c1b057391cd1e4435eb47db001464e9ada7"
	I1018 17:15:32.646554   12346 cri.go:89] found id: "ece6fd8e36b7414b9ea8a96fa9d85543498f89e17705fa3bc262b1570f482b24"
	I1018 17:15:32.646575   12346 cri.go:89] found id: "d12b84a60111629c5268442b96bd59e440c9aec3f86f326d9528b07daa476596"
	I1018 17:15:32.646614   12346 cri.go:89] found id: "d87115dc1b972147e18ebd00d21f7d791e5831c69fbef5f5e25fb2fade668bf7"
	I1018 17:15:32.646636   12346 cri.go:89] found id: "07f016f168b62771c5ab60ab8215041fcead58a20ef1da5932bcb8d6da58077f"
	I1018 17:15:32.646657   12346 cri.go:89] found id: "f085bccd65219cd8bb8d59ffcc8bee71589bead44d17e3e6fe5269fe6781f2f3"
	I1018 17:15:32.646689   12346 cri.go:89] found id: "4a1b92f8cd14a17c1e2790e1ca03a5608e43fb0ee84dba04aae2757215b8f043"
	I1018 17:15:32.646710   12346 cri.go:89] found id: "246aa3ddddf57033502d5fd5679ade1ae4e79cefdfdc7645841ea4f17a3e0313"
	I1018 17:15:32.646735   12346 cri.go:89] found id: ""
	I1018 17:15:32.646824   12346 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 17:15:32.661943   12346 out.go:203] 
	W1018 17:15:32.665235   12346 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T17:15:32Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T17:15:32Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 17:15:32.665261   12346 out.go:285] * 
	* 
	W1018 17:15:32.669653   12346 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 17:15:32.672980   12346 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-164474 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (8.57s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.27s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-w6sqz" [ef275008-60c3-4bde-a747-35f70a06cb02] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003723351s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-164474 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-164474 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (268.708174ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 17:15:23.885996   11901 out.go:360] Setting OutFile to fd 1 ...
	I1018 17:15:23.886231   11901 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 17:15:23.886266   11901 out.go:374] Setting ErrFile to fd 2...
	I1018 17:15:23.886287   11901 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 17:15:23.886544   11901 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-2509/.minikube/bin
	I1018 17:15:23.886862   11901 mustload.go:65] Loading cluster: addons-164474
	I1018 17:15:23.887254   11901 config.go:182] Loaded profile config "addons-164474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:15:23.887299   11901 addons.go:606] checking whether the cluster is paused
	I1018 17:15:23.887439   11901 config.go:182] Loaded profile config "addons-164474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:15:23.887493   11901 host.go:66] Checking if "addons-164474" exists ...
	I1018 17:15:23.887958   11901 cli_runner.go:164] Run: docker container inspect addons-164474 --format={{.State.Status}}
	I1018 17:15:23.906460   11901 ssh_runner.go:195] Run: systemctl --version
	I1018 17:15:23.906529   11901 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164474
	I1018 17:15:23.924762   11901 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/addons-164474/id_rsa Username:docker}
	I1018 17:15:24.027641   11901 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 17:15:24.027770   11901 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 17:15:24.066570   11901 cri.go:89] found id: "968c95a146a7fe08d5189eee29bd9582f2894b5d0f04e78e794058e86e194f17"
	I1018 17:15:24.066639   11901 cri.go:89] found id: "7657f768a8a9ac41bcd1f5e7a196579a7dcf31f08b605bba0bd11acb46369892"
	I1018 17:15:24.066678   11901 cri.go:89] found id: "a4777ba56bbe130ce2d0759f981f7a5a7a81a6f76b26c9602759d75786f28075"
	I1018 17:15:24.066728   11901 cri.go:89] found id: "cdf72845ca4f04b7f38a96e8e2bc2c5bff55db097097fe86438572754061e4d1"
	I1018 17:15:24.066757   11901 cri.go:89] found id: "cbf41849e12c028d15eee86acc3c0fcaf5d31af35d656b7935de4a45730fb182"
	I1018 17:15:24.066776   11901 cri.go:89] found id: "cd1c762de0b5dd26a00d004eb60c3a0356920d2d898bf210120e83239de379d3"
	I1018 17:15:24.066810   11901 cri.go:89] found id: "f97b941babec4dfdf104ffdbe7459e396a64a17a6edfa11989d9170c5b5365e2"
	I1018 17:15:24.066830   11901 cri.go:89] found id: "c763b99ed4a70e785446e888023cdfabc0fdeb6e7dcb1a84844d98d22b841291"
	I1018 17:15:24.066848   11901 cri.go:89] found id: "26297b4bb562054967554961013b4aecf4a819a64b9615266425ddb33797d349"
	I1018 17:15:24.066872   11901 cri.go:89] found id: "14f2f76f82dc964b8b157e088100913e80feaa2be642ecc8b72fea78bd2a0ed1"
	I1018 17:15:24.066905   11901 cri.go:89] found id: "901f9bb2898fac636a6903ad516f9b140591198721e4e2bfd30c9ab9155a01ed"
	I1018 17:15:24.066923   11901 cri.go:89] found id: "8a59f8ac6ef2822e7088c9cd1a68272c147739f96eaf27abf4a85d43c140b0ea"
	I1018 17:15:24.066942   11901 cri.go:89] found id: "ce684ca523f08f4af3d1134e239085b099cc9e2cd0f8679963ba4f111fcf7567"
	I1018 17:15:24.066974   11901 cri.go:89] found id: "f402fe3063f55e7003a2aaac453c55c6b2139f8fa75d1a062b447a4a5a8f278c"
	I1018 17:15:24.066998   11901 cri.go:89] found id: "6865806b912ab6d902d766fb60959288c01cc7c01f0f6d41ece13a1484e43f45"
	I1018 17:15:24.067020   11901 cri.go:89] found id: "8d07fa8a1c45fb2b7f3f20b332023c1b057391cd1e4435eb47db001464e9ada7"
	I1018 17:15:24.067072   11901 cri.go:89] found id: "ece6fd8e36b7414b9ea8a96fa9d85543498f89e17705fa3bc262b1570f482b24"
	I1018 17:15:24.067095   11901 cri.go:89] found id: "d12b84a60111629c5268442b96bd59e440c9aec3f86f326d9528b07daa476596"
	I1018 17:15:24.067114   11901 cri.go:89] found id: "d87115dc1b972147e18ebd00d21f7d791e5831c69fbef5f5e25fb2fade668bf7"
	I1018 17:15:24.067132   11901 cri.go:89] found id: "07f016f168b62771c5ab60ab8215041fcead58a20ef1da5932bcb8d6da58077f"
	I1018 17:15:24.067164   11901 cri.go:89] found id: "f085bccd65219cd8bb8d59ffcc8bee71589bead44d17e3e6fe5269fe6781f2f3"
	I1018 17:15:24.067192   11901 cri.go:89] found id: "4a1b92f8cd14a17c1e2790e1ca03a5608e43fb0ee84dba04aae2757215b8f043"
	I1018 17:15:24.067213   11901 cri.go:89] found id: "246aa3ddddf57033502d5fd5679ade1ae4e79cefdfdc7645841ea4f17a3e0313"
	I1018 17:15:24.067249   11901 cri.go:89] found id: ""
	I1018 17:15:24.067347   11901 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 17:15:24.095003   11901 out.go:203] 
	W1018 17:15:24.098010   11901 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T17:15:24Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T17:15:24Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 17:15:24.098093   11901 out.go:285] * 
	* 
	W1018 17:15:24.102455   11901 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 17:15:24.106390   11901 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-164474 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (6.27s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (5.26s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-v54wx" [9eac8687-a97b-4657-875a-2f8199ce039a] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003634117s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-164474 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-164474 addons disable yakd --alsologtostderr -v=1: exit status 11 (255.438038ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 17:15:17.636546   11784 out.go:360] Setting OutFile to fd 1 ...
	I1018 17:15:17.636792   11784 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 17:15:17.636806   11784 out.go:374] Setting ErrFile to fd 2...
	I1018 17:15:17.636811   11784 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 17:15:17.637112   11784 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-2509/.minikube/bin
	I1018 17:15:17.637445   11784 mustload.go:65] Loading cluster: addons-164474
	I1018 17:15:17.637808   11784 config.go:182] Loaded profile config "addons-164474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:15:17.637825   11784 addons.go:606] checking whether the cluster is paused
	I1018 17:15:17.637925   11784 config.go:182] Loaded profile config "addons-164474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:15:17.637939   11784 host.go:66] Checking if "addons-164474" exists ...
	I1018 17:15:17.638391   11784 cli_runner.go:164] Run: docker container inspect addons-164474 --format={{.State.Status}}
	I1018 17:15:17.657514   11784 ssh_runner.go:195] Run: systemctl --version
	I1018 17:15:17.657604   11784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164474
	I1018 17:15:17.674560   11784 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/addons-164474/id_rsa Username:docker}
	I1018 17:15:17.775709   11784 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 17:15:17.775796   11784 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 17:15:17.805127   11784 cri.go:89] found id: "968c95a146a7fe08d5189eee29bd9582f2894b5d0f04e78e794058e86e194f17"
	I1018 17:15:17.805148   11784 cri.go:89] found id: "7657f768a8a9ac41bcd1f5e7a196579a7dcf31f08b605bba0bd11acb46369892"
	I1018 17:15:17.805153   11784 cri.go:89] found id: "a4777ba56bbe130ce2d0759f981f7a5a7a81a6f76b26c9602759d75786f28075"
	I1018 17:15:17.805157   11784 cri.go:89] found id: "cdf72845ca4f04b7f38a96e8e2bc2c5bff55db097097fe86438572754061e4d1"
	I1018 17:15:17.805160   11784 cri.go:89] found id: "cbf41849e12c028d15eee86acc3c0fcaf5d31af35d656b7935de4a45730fb182"
	I1018 17:15:17.805164   11784 cri.go:89] found id: "cd1c762de0b5dd26a00d004eb60c3a0356920d2d898bf210120e83239de379d3"
	I1018 17:15:17.805167   11784 cri.go:89] found id: "f97b941babec4dfdf104ffdbe7459e396a64a17a6edfa11989d9170c5b5365e2"
	I1018 17:15:17.805170   11784 cri.go:89] found id: "c763b99ed4a70e785446e888023cdfabc0fdeb6e7dcb1a84844d98d22b841291"
	I1018 17:15:17.805173   11784 cri.go:89] found id: "26297b4bb562054967554961013b4aecf4a819a64b9615266425ddb33797d349"
	I1018 17:15:17.805180   11784 cri.go:89] found id: "14f2f76f82dc964b8b157e088100913e80feaa2be642ecc8b72fea78bd2a0ed1"
	I1018 17:15:17.805183   11784 cri.go:89] found id: "901f9bb2898fac636a6903ad516f9b140591198721e4e2bfd30c9ab9155a01ed"
	I1018 17:15:17.805187   11784 cri.go:89] found id: "8a59f8ac6ef2822e7088c9cd1a68272c147739f96eaf27abf4a85d43c140b0ea"
	I1018 17:15:17.805190   11784 cri.go:89] found id: "ce684ca523f08f4af3d1134e239085b099cc9e2cd0f8679963ba4f111fcf7567"
	I1018 17:15:17.805193   11784 cri.go:89] found id: "f402fe3063f55e7003a2aaac453c55c6b2139f8fa75d1a062b447a4a5a8f278c"
	I1018 17:15:17.805200   11784 cri.go:89] found id: "6865806b912ab6d902d766fb60959288c01cc7c01f0f6d41ece13a1484e43f45"
	I1018 17:15:17.805208   11784 cri.go:89] found id: "8d07fa8a1c45fb2b7f3f20b332023c1b057391cd1e4435eb47db001464e9ada7"
	I1018 17:15:17.805215   11784 cri.go:89] found id: "ece6fd8e36b7414b9ea8a96fa9d85543498f89e17705fa3bc262b1570f482b24"
	I1018 17:15:17.805221   11784 cri.go:89] found id: "d12b84a60111629c5268442b96bd59e440c9aec3f86f326d9528b07daa476596"
	I1018 17:15:17.805224   11784 cri.go:89] found id: "d87115dc1b972147e18ebd00d21f7d791e5831c69fbef5f5e25fb2fade668bf7"
	I1018 17:15:17.805228   11784 cri.go:89] found id: "07f016f168b62771c5ab60ab8215041fcead58a20ef1da5932bcb8d6da58077f"
	I1018 17:15:17.805230   11784 cri.go:89] found id: "f085bccd65219cd8bb8d59ffcc8bee71589bead44d17e3e6fe5269fe6781f2f3"
	I1018 17:15:17.805234   11784 cri.go:89] found id: "4a1b92f8cd14a17c1e2790e1ca03a5608e43fb0ee84dba04aae2757215b8f043"
	I1018 17:15:17.805237   11784 cri.go:89] found id: "246aa3ddddf57033502d5fd5679ade1ae4e79cefdfdc7645841ea4f17a3e0313"
	I1018 17:15:17.805240   11784 cri.go:89] found id: ""
	I1018 17:15:17.805288   11784 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 17:15:17.822014   11784 out.go:203] 
	W1018 17:15:17.825074   11784 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T17:15:17Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T17:15:17Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 17:15:17.825105   11784 out.go:285] * 
	* 
	W1018 17:15:17.829368   11784 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 17:15:17.832295   11784 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-164474 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (5.26s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (603.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-306136 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-306136 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-vcjkz" [d3e81abb-4d3b-44d8-8c11-c5aef0ae8abe] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-306136 -n functional-306136
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-10-18 17:32:14.990367603 +0000 UTC m=+1221.288477608
functional_test.go:1645: (dbg) Run:  kubectl --context functional-306136 describe po hello-node-connect-7d85dfc575-vcjkz -n default
functional_test.go:1645: (dbg) kubectl --context functional-306136 describe po hello-node-connect-7d85dfc575-vcjkz -n default:
Name:             hello-node-connect-7d85dfc575-vcjkz
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-306136/192.168.49.2
Start Time:       Sat, 18 Oct 2025 17:22:14 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
IP:           10.244.0.5
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xct78 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-xct78:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-vcjkz to functional-306136
Normal   Pulling    6m52s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     6m52s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     6m52s (x5 over 10m)   kubelet            Error: ErrImagePull
Warning  Failed     4m51s (x20 over 10m)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m36s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1645: (dbg) Run:  kubectl --context functional-306136 logs hello-node-connect-7d85dfc575-vcjkz -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-306136 logs hello-node-connect-7d85dfc575-vcjkz -n default: exit status 1 (112.009918ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-vcjkz" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-306136 logs hello-node-connect-7d85dfc575-vcjkz -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
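Note: the events above point at CRI-O's short-name handling rather than the service itself. The deployment created at functional_test.go:1636 references the unqualified image `kicbase/echo-server`; with short-name mode set to enforcing, the runtime refuses to resolve a name that appears ambiguous across the configured search registries, so the pod never leaves ImagePullBackOff and the 10m wait times out. The helper below is hypothetical (not part of functional_test.go): a minimal Go sketch that fully qualifies the image name before the deployment is created, assuming docker.io is the intended registry for this image.

	// Hypothetical helper: qualify a short image reference so CRI-O's enforcing
	// short-name mode never has to disambiguate it.
	package main

	import (
		"fmt"
		"strings"
	)

	// qualifyImage prefixes docker.io/ (an assumed default registry) onto names
	// that carry no registry host, and appends :latest when no tag is present.
	func qualifyImage(name string) string {
		first := strings.SplitN(name, "/", 2)[0]
		if !strings.ContainsAny(first, ".:") && first != "localhost" {
			name = "docker.io/" + name
		}
		if !strings.Contains(name[strings.LastIndex(name, "/")+1:], ":") {
			name += ":latest"
		}
		return name
	}

	func main() {
		// Prints "docker.io/kicbase/echo-server:latest".
		fmt.Println(qualifyImage("kicbase/echo-server"))
	}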
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-306136 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-vcjkz
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-306136/192.168.49.2
Start Time:       Sat, 18 Oct 2025 17:22:14 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
IP:           10.244.0.5
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xct78 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-xct78:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-vcjkz to functional-306136
Normal   Pulling    6m52s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     6m52s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     6m52s (x5 over 10m)   kubelet            Error: ErrImagePull
Warning  Failed     4m51s (x20 over 10m)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m36s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-306136 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-306136 logs -l app=hello-node-connect: exit status 1 (90.495051ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-vcjkz" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-306136 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-306136 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.109.225.195
IPs:                      10.109.225.195
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32002/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-306136
helpers_test.go:243: (dbg) docker inspect functional-306136:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f0f24c01d25db41e97ac7031bb0fcdc390130a572e51bd88d4aaeb7b45ec02b6",
	        "Created": "2025-10-18T17:19:19.770056526Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 19982,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T17:19:19.845123564Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/f0f24c01d25db41e97ac7031bb0fcdc390130a572e51bd88d4aaeb7b45ec02b6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f0f24c01d25db41e97ac7031bb0fcdc390130a572e51bd88d4aaeb7b45ec02b6/hostname",
	        "HostsPath": "/var/lib/docker/containers/f0f24c01d25db41e97ac7031bb0fcdc390130a572e51bd88d4aaeb7b45ec02b6/hosts",
	        "LogPath": "/var/lib/docker/containers/f0f24c01d25db41e97ac7031bb0fcdc390130a572e51bd88d4aaeb7b45ec02b6/f0f24c01d25db41e97ac7031bb0fcdc390130a572e51bd88d4aaeb7b45ec02b6-json.log",
	        "Name": "/functional-306136",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-306136:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-306136",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f0f24c01d25db41e97ac7031bb0fcdc390130a572e51bd88d4aaeb7b45ec02b6",
	                "LowerDir": "/var/lib/docker/overlay2/ad2c38bdf1a932ea090b8e3be6befb9e627e0d1d20c7d06b64865269eb40d809-init/diff:/var/lib/docker/overlay2/584ab177b02ad2db5330471b7171ad39934c457d8615b9ee4939a04b59f78474/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ad2c38bdf1a932ea090b8e3be6befb9e627e0d1d20c7d06b64865269eb40d809/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ad2c38bdf1a932ea090b8e3be6befb9e627e0d1d20c7d06b64865269eb40d809/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ad2c38bdf1a932ea090b8e3be6befb9e627e0d1d20c7d06b64865269eb40d809/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-306136",
	                "Source": "/var/lib/docker/volumes/functional-306136/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-306136",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-306136",
	                "name.minikube.sigs.k8s.io": "functional-306136",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "026b98d5912a47cc028b1118707fc943cb77016e0fb3b23fabc78674be985059",
	            "SandboxKey": "/var/run/docker/netns/026b98d5912a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-306136": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:e0:27:70:89:44",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d6d50010608d499198cc437e5e6797c6e30e90961a8a94b594861124d46216fb",
	                    "EndpointID": "61f105d4d5312789546ef27d5cdd391e655fd72e52d4c4dcca9057476aba6367",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-306136",
	                        "f0f24c01d25d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-306136 -n functional-306136
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-306136 logs -n 25: (1.424526977s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ license │                                                                                                                                                           │ minikube          │ jenkins │ v1.37.0 │ 18 Oct 25 17:22 UTC │ 18 Oct 25 17:22 UTC │
	│ cp      │ functional-306136 cp functional-306136:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1321492752/001/cp-test.txt                                │ functional-306136 │ jenkins │ v1.37.0 │ 18 Oct 25 17:22 UTC │ 18 Oct 25 17:22 UTC │
	│ ssh     │ functional-306136 ssh echo hello                                                                                                                          │ functional-306136 │ jenkins │ v1.37.0 │ 18 Oct 25 17:22 UTC │ 18 Oct 25 17:22 UTC │
	│ ssh     │ functional-306136 ssh -n functional-306136 sudo cat /home/docker/cp-test.txt                                                                              │ functional-306136 │ jenkins │ v1.37.0 │ 18 Oct 25 17:22 UTC │ 18 Oct 25 17:22 UTC │
	│ ssh     │ functional-306136 ssh cat /etc/hostname                                                                                                                   │ functional-306136 │ jenkins │ v1.37.0 │ 18 Oct 25 17:22 UTC │ 18 Oct 25 17:22 UTC │
	│ cp      │ functional-306136 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                                 │ functional-306136 │ jenkins │ v1.37.0 │ 18 Oct 25 17:22 UTC │ 18 Oct 25 17:22 UTC │
	│ ssh     │ functional-306136 ssh sudo systemctl is-active docker                                                                                                     │ functional-306136 │ jenkins │ v1.37.0 │ 18 Oct 25 17:22 UTC │                     │
	│ ssh     │ functional-306136 ssh -n functional-306136 sudo cat /tmp/does/not/exist/cp-test.txt                                                                       │ functional-306136 │ jenkins │ v1.37.0 │ 18 Oct 25 17:22 UTC │ 18 Oct 25 17:22 UTC │
	│ ssh     │ functional-306136 ssh sudo systemctl is-active containerd                                                                                                 │ functional-306136 │ jenkins │ v1.37.0 │ 18 Oct 25 17:22 UTC │                     │
	│ tunnel  │ functional-306136 tunnel --alsologtostderr                                                                                                                │ functional-306136 │ jenkins │ v1.37.0 │ 18 Oct 25 17:22 UTC │                     │
	│ tunnel  │ functional-306136 tunnel --alsologtostderr                                                                                                                │ functional-306136 │ jenkins │ v1.37.0 │ 18 Oct 25 17:22 UTC │                     │
	│ image   │ functional-306136 image load --daemon kicbase/echo-server:functional-306136 --alsologtostderr                                                             │ functional-306136 │ jenkins │ v1.37.0 │ 18 Oct 25 17:22 UTC │ 18 Oct 25 17:22 UTC │
	│ tunnel  │ functional-306136 tunnel --alsologtostderr                                                                                                                │ functional-306136 │ jenkins │ v1.37.0 │ 18 Oct 25 17:22 UTC │                     │
	│ image   │ functional-306136 image ls                                                                                                                                │ functional-306136 │ jenkins │ v1.37.0 │ 18 Oct 25 17:22 UTC │ 18 Oct 25 17:22 UTC │
	│ image   │ functional-306136 image load --daemon kicbase/echo-server:functional-306136 --alsologtostderr                                                             │ functional-306136 │ jenkins │ v1.37.0 │ 18 Oct 25 17:22 UTC │ 18 Oct 25 17:22 UTC │
	│ image   │ functional-306136 image ls                                                                                                                                │ functional-306136 │ jenkins │ v1.37.0 │ 18 Oct 25 17:22 UTC │ 18 Oct 25 17:22 UTC │
	│ image   │ functional-306136 image load --daemon kicbase/echo-server:functional-306136 --alsologtostderr                                                             │ functional-306136 │ jenkins │ v1.37.0 │ 18 Oct 25 17:22 UTC │ 18 Oct 25 17:22 UTC │
	│ image   │ functional-306136 image ls                                                                                                                                │ functional-306136 │ jenkins │ v1.37.0 │ 18 Oct 25 17:22 UTC │ 18 Oct 25 17:22 UTC │
	│ image   │ functional-306136 image save kicbase/echo-server:functional-306136 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr │ functional-306136 │ jenkins │ v1.37.0 │ 18 Oct 25 17:22 UTC │ 18 Oct 25 17:22 UTC │
	│ image   │ functional-306136 image rm kicbase/echo-server:functional-306136 --alsologtostderr                                                                        │ functional-306136 │ jenkins │ v1.37.0 │ 18 Oct 25 17:22 UTC │ 18 Oct 25 17:22 UTC │
	│ image   │ functional-306136 image ls                                                                                                                                │ functional-306136 │ jenkins │ v1.37.0 │ 18 Oct 25 17:22 UTC │ 18 Oct 25 17:22 UTC │
	│ image   │ functional-306136 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-306136 │ jenkins │ v1.37.0 │ 18 Oct 25 17:22 UTC │ 18 Oct 25 17:22 UTC │
	│ image   │ functional-306136 image save --daemon kicbase/echo-server:functional-306136 --alsologtostderr                                                             │ functional-306136 │ jenkins │ v1.37.0 │ 18 Oct 25 17:22 UTC │ 18 Oct 25 17:22 UTC │
	│ addons  │ functional-306136 addons list                                                                                                                             │ functional-306136 │ jenkins │ v1.37.0 │ 18 Oct 25 17:22 UTC │ 18 Oct 25 17:22 UTC │
	│ addons  │ functional-306136 addons list -o json                                                                                                                     │ functional-306136 │ jenkins │ v1.37.0 │ 18 Oct 25 17:22 UTC │ 18 Oct 25 17:22 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 17:21:21
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 17:21:21.784394   24312 out.go:360] Setting OutFile to fd 1 ...
	I1018 17:21:21.784505   24312 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 17:21:21.784509   24312 out.go:374] Setting ErrFile to fd 2...
	I1018 17:21:21.784513   24312 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 17:21:21.784778   24312 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-2509/.minikube/bin
	I1018 17:21:21.785152   24312 out.go:368] Setting JSON to false
	I1018 17:21:21.786057   24312 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3831,"bootTime":1760804251,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1018 17:21:21.786111   24312 start.go:141] virtualization:  
	I1018 17:21:21.791740   24312 out.go:179] * [functional-306136] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 17:21:21.794843   24312 out.go:179]   - MINIKUBE_LOCATION=21409
	I1018 17:21:21.794898   24312 notify.go:220] Checking for updates...
	I1018 17:21:21.800793   24312 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 17:21:21.803686   24312 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-2509/kubeconfig
	I1018 17:21:21.806559   24312 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-2509/.minikube
	I1018 17:21:21.809351   24312 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 17:21:21.812271   24312 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 17:21:21.815610   24312 config.go:182] Loaded profile config "functional-306136": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:21:21.815701   24312 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 17:21:21.847818   24312 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 17:21:21.847935   24312 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 17:21:21.907797   24312 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-10-18 17:21:21.897565507 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 17:21:21.907899   24312 docker.go:318] overlay module found
	I1018 17:21:21.911207   24312 out.go:179] * Using the docker driver based on existing profile
	I1018 17:21:21.914001   24312 start.go:305] selected driver: docker
	I1018 17:21:21.914011   24312 start.go:925] validating driver "docker" against &{Name:functional-306136 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-306136 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 17:21:21.914110   24312 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 17:21:21.914213   24312 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 17:21:21.971246   24312 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-10-18 17:21:21.962611015 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 17:21:21.971671   24312 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 17:21:21.971693   24312 cni.go:84] Creating CNI manager for ""
	I1018 17:21:21.971752   24312 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 17:21:21.971817   24312 start.go:349] cluster config:
	{Name:functional-306136 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-306136 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 17:21:21.975010   24312 out.go:179] * Starting "functional-306136" primary control-plane node in "functional-306136" cluster
	I1018 17:21:21.977740   24312 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 17:21:21.980574   24312 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 17:21:21.983427   24312 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 17:21:21.983473   24312 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1018 17:21:21.983481   24312 cache.go:58] Caching tarball of preloaded images
	I1018 17:21:21.983515   24312 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 17:21:21.983563   24312 preload.go:233] Found /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 17:21:21.983572   24312 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 17:21:21.983683   24312 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/functional-306136/config.json ...
	I1018 17:21:22.004064   24312 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 17:21:22.004079   24312 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 17:21:22.004093   24312 cache.go:232] Successfully downloaded all kic artifacts
	I1018 17:21:22.004213   24312 start.go:360] acquireMachinesLock for functional-306136: {Name:mkcf3ac5771bc88acbe490a673a1af6fb15d5d91 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 17:21:22.004288   24312 start.go:364] duration metric: took 55.475µs to acquireMachinesLock for "functional-306136"
	I1018 17:21:22.004311   24312 start.go:96] Skipping create...Using existing machine configuration
	I1018 17:21:22.004316   24312 fix.go:54] fixHost starting: 
	I1018 17:21:22.004602   24312 cli_runner.go:164] Run: docker container inspect functional-306136 --format={{.State.Status}}
	I1018 17:21:22.023498   24312 fix.go:112] recreateIfNeeded on functional-306136: state=Running err=<nil>
	W1018 17:21:22.023528   24312 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 17:21:22.026827   24312 out.go:252] * Updating the running docker "functional-306136" container ...
	I1018 17:21:22.026866   24312 machine.go:93] provisionDockerMachine start ...
	I1018 17:21:22.026949   24312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-306136
	I1018 17:21:22.050423   24312 main.go:141] libmachine: Using SSH client type: native
	I1018 17:21:22.050752   24312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1018 17:21:22.050759   24312 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 17:21:22.200813   24312 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-306136
	
	I1018 17:21:22.200835   24312 ubuntu.go:182] provisioning hostname "functional-306136"
	I1018 17:21:22.200894   24312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-306136
	I1018 17:21:22.218316   24312 main.go:141] libmachine: Using SSH client type: native
	I1018 17:21:22.218614   24312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1018 17:21:22.218622   24312 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-306136 && echo "functional-306136" | sudo tee /etc/hostname
	I1018 17:21:22.374375   24312 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-306136
	
	I1018 17:21:22.374441   24312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-306136
	I1018 17:21:22.393874   24312 main.go:141] libmachine: Using SSH client type: native
	I1018 17:21:22.394177   24312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1018 17:21:22.394192   24312 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-306136' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-306136/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-306136' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 17:21:22.542601   24312 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 17:21:22.542619   24312 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-2509/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-2509/.minikube}
	I1018 17:21:22.542648   24312 ubuntu.go:190] setting up certificates
	I1018 17:21:22.542656   24312 provision.go:84] configureAuth start
	I1018 17:21:22.542720   24312 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-306136
	I1018 17:21:22.560563   24312 provision.go:143] copyHostCerts
	I1018 17:21:22.560629   24312 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem, removing ...
	I1018 17:21:22.560646   24312 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem
	I1018 17:21:22.560724   24312 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem (1078 bytes)
	I1018 17:21:22.560824   24312 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem, removing ...
	I1018 17:21:22.560827   24312 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem
	I1018 17:21:22.560852   24312 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem (1123 bytes)
	I1018 17:21:22.560909   24312 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem, removing ...
	I1018 17:21:22.560912   24312 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem
	I1018 17:21:22.561024   24312 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem (1675 bytes)
	I1018 17:21:22.561117   24312 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem org=jenkins.functional-306136 san=[127.0.0.1 192.168.49.2 functional-306136 localhost minikube]
	I1018 17:21:23.503429   24312 provision.go:177] copyRemoteCerts
	I1018 17:21:23.503489   24312 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 17:21:23.503526   24312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-306136
	I1018 17:21:23.522468   24312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/functional-306136/id_rsa Username:docker}
	I1018 17:21:23.628445   24312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 17:21:23.648145   24312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1018 17:21:23.668321   24312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1018 17:21:23.686062   24312 provision.go:87] duration metric: took 1.143383988s to configureAuth
	I1018 17:21:23.686079   24312 ubuntu.go:206] setting minikube options for container-runtime
	I1018 17:21:23.686279   24312 config.go:182] Loaded profile config "functional-306136": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:21:23.686374   24312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-306136
	I1018 17:21:23.703757   24312 main.go:141] libmachine: Using SSH client type: native
	I1018 17:21:23.704051   24312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1018 17:21:23.704063   24312 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 17:21:29.092880   24312 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 17:21:29.092907   24312 machine.go:96] duration metric: took 7.066020654s to provisionDockerMachine
	I1018 17:21:29.092917   24312 start.go:293] postStartSetup for "functional-306136" (driver="docker")
	I1018 17:21:29.092926   24312 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 17:21:29.093050   24312 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 17:21:29.093096   24312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-306136
	I1018 17:21:29.110660   24312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/functional-306136/id_rsa Username:docker}
	I1018 17:21:29.216853   24312 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 17:21:29.220070   24312 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 17:21:29.220088   24312 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 17:21:29.220097   24312 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/addons for local assets ...
	I1018 17:21:29.220152   24312 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/files for local assets ...
	I1018 17:21:29.220227   24312 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> 43202.pem in /etc/ssl/certs
	I1018 17:21:29.220298   24312 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/test/nested/copy/4320/hosts -> hosts in /etc/test/nested/copy/4320
	I1018 17:21:29.220340   24312 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4320
	I1018 17:21:29.227799   24312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /etc/ssl/certs/43202.pem (1708 bytes)
	I1018 17:21:29.245378   24312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/test/nested/copy/4320/hosts --> /etc/test/nested/copy/4320/hosts (40 bytes)
	I1018 17:21:29.263399   24312 start.go:296] duration metric: took 170.468146ms for postStartSetup
	I1018 17:21:29.263490   24312 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 17:21:29.263529   24312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-306136
	I1018 17:21:29.281261   24312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/functional-306136/id_rsa Username:docker}
	I1018 17:21:29.382009   24312 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 17:21:29.386893   24312 fix.go:56] duration metric: took 7.38257004s for fixHost
	I1018 17:21:29.386916   24312 start.go:83] releasing machines lock for "functional-306136", held for 7.3826196s
	I1018 17:21:29.386984   24312 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-306136
	I1018 17:21:29.404065   24312 ssh_runner.go:195] Run: cat /version.json
	I1018 17:21:29.404114   24312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-306136
	I1018 17:21:29.404381   24312 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 17:21:29.404429   24312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-306136
	I1018 17:21:29.428453   24312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/functional-306136/id_rsa Username:docker}
	I1018 17:21:29.429600   24312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/functional-306136/id_rsa Username:docker}
	I1018 17:21:29.650213   24312 ssh_runner.go:195] Run: systemctl --version
	I1018 17:21:29.657361   24312 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 17:21:29.699101   24312 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 17:21:29.703750   24312 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 17:21:29.703811   24312 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 17:21:29.711559   24312 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 17:21:29.711573   24312 start.go:495] detecting cgroup driver to use...
	I1018 17:21:29.711607   24312 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 17:21:29.711655   24312 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 17:21:29.727105   24312 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 17:21:29.740906   24312 docker.go:218] disabling cri-docker service (if available) ...
	I1018 17:21:29.740996   24312 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 17:21:29.756813   24312 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 17:21:29.770270   24312 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 17:21:29.917504   24312 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 17:21:30.060963   24312 docker.go:234] disabling docker service ...
	I1018 17:21:30.061028   24312 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 17:21:30.079693   24312 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 17:21:30.094093   24312 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 17:21:30.239778   24312 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 17:21:30.380781   24312 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 17:21:30.393732   24312 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 17:21:30.408551   24312 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 17:21:30.408615   24312 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:21:30.417348   24312 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 17:21:30.417408   24312 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:21:30.426751   24312 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:21:30.435703   24312 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:21:30.444655   24312 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 17:21:30.452427   24312 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:21:30.460884   24312 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:21:30.469004   24312 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:21:30.477903   24312 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 17:21:30.485367   24312 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 17:21:30.492886   24312 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 17:21:30.630335   24312 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 17:21:30.804225   24312 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 17:21:30.804295   24312 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 17:21:30.808104   24312 start.go:563] Will wait 60s for crictl version
	I1018 17:21:30.808156   24312 ssh_runner.go:195] Run: which crictl
	I1018 17:21:30.811575   24312 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 17:21:30.839390   24312 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 17:21:30.839479   24312 ssh_runner.go:195] Run: crio --version
	I1018 17:21:30.869932   24312 ssh_runner.go:195] Run: crio --version
	I1018 17:21:30.901840   24312 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 17:21:30.904732   24312 cli_runner.go:164] Run: docker network inspect functional-306136 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 17:21:30.919931   24312 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1018 17:21:30.927158   24312 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1018 17:21:30.930127   24312 kubeadm.go:883] updating cluster {Name:functional-306136 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-306136 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 17:21:30.930243   24312 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 17:21:30.930313   24312 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 17:21:30.971510   24312 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 17:21:30.971520   24312 crio.go:433] Images already preloaded, skipping extraction
	I1018 17:21:30.971575   24312 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 17:21:30.997171   24312 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 17:21:30.997182   24312 cache_images.go:85] Images are preloaded, skipping loading
	I1018 17:21:30.997189   24312 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1018 17:21:30.997299   24312 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-306136 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-306136 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 17:21:30.997378   24312 ssh_runner.go:195] Run: crio config
	I1018 17:21:31.061730   24312 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1018 17:21:31.061780   24312 cni.go:84] Creating CNI manager for ""
	I1018 17:21:31.061787   24312 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 17:21:31.061799   24312 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 17:21:31.061822   24312 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-306136 NodeName:functional-306136 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 17:21:31.061940   24312 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-306136"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 17:21:31.062006   24312 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 17:21:31.069811   24312 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 17:21:31.069879   24312 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 17:21:31.077501   24312 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1018 17:21:31.090458   24312 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 17:21:31.103635   24312 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2064 bytes)
	I1018 17:21:31.117163   24312 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1018 17:21:31.120995   24312 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 17:21:31.253448   24312 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 17:21:31.268028   24312 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/functional-306136 for IP: 192.168.49.2
	I1018 17:21:31.268039   24312 certs.go:195] generating shared ca certs ...
	I1018 17:21:31.268052   24312 certs.go:227] acquiring lock for ca certs: {Name:mk544ed642fa2832e9f6dd22fa45f3270b7c1ad4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:21:31.268200   24312 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key
	I1018 17:21:31.268245   24312 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key
	I1018 17:21:31.268251   24312 certs.go:257] generating profile certs ...
	I1018 17:21:31.268326   24312 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/functional-306136/client.key
	I1018 17:21:31.268377   24312 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/functional-306136/apiserver.key.7ada5296
	I1018 17:21:31.268412   24312 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/functional-306136/proxy-client.key
	I1018 17:21:31.268519   24312 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem (1338 bytes)
	W1018 17:21:31.268544   24312 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320_empty.pem, impossibly tiny 0 bytes
	I1018 17:21:31.268552   24312 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 17:21:31.268575   24312 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem (1078 bytes)
	I1018 17:21:31.268594   24312 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem (1123 bytes)
	I1018 17:21:31.268616   24312 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem (1675 bytes)
	I1018 17:21:31.268657   24312 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem (1708 bytes)
	I1018 17:21:31.269251   24312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 17:21:31.288054   24312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 17:21:31.306342   24312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 17:21:31.324210   24312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 17:21:31.341911   24312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/functional-306136/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1018 17:21:31.359334   24312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/functional-306136/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 17:21:31.377000   24312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/functional-306136/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 17:21:31.394583   24312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/functional-306136/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 17:21:31.412361   24312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 17:21:31.429757   24312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem --> /usr/share/ca-certificates/4320.pem (1338 bytes)
	I1018 17:21:31.447292   24312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /usr/share/ca-certificates/43202.pem (1708 bytes)
	I1018 17:21:31.464187   24312 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 17:21:31.476644   24312 ssh_runner.go:195] Run: openssl version
	I1018 17:21:31.482897   24312 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 17:21:31.491113   24312 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:21:31.494760   24312 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:21:31.494813   24312 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:21:31.536024   24312 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 17:21:31.544023   24312 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4320.pem && ln -fs /usr/share/ca-certificates/4320.pem /etc/ssl/certs/4320.pem"
	I1018 17:21:31.552592   24312 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4320.pem
	I1018 17:21:31.556445   24312 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 17:19 /usr/share/ca-certificates/4320.pem
	I1018 17:21:31.556504   24312 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4320.pem
	I1018 17:21:31.598108   24312 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4320.pem /etc/ssl/certs/51391683.0"
	I1018 17:21:31.607624   24312 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43202.pem && ln -fs /usr/share/ca-certificates/43202.pem /etc/ssl/certs/43202.pem"
	I1018 17:21:31.616437   24312 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43202.pem
	I1018 17:21:31.620469   24312 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 17:19 /usr/share/ca-certificates/43202.pem
	I1018 17:21:31.620527   24312 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43202.pem
	I1018 17:21:31.662760   24312 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43202.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 17:21:31.671098   24312 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 17:21:31.675055   24312 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 17:21:31.716266   24312 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 17:21:31.762949   24312 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 17:21:31.804290   24312 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 17:21:31.845723   24312 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 17:21:31.887164   24312 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1018 17:21:31.928281   24312 kubeadm.go:400] StartCluster: {Name:functional-306136 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-306136 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 17:21:31.928357   24312 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 17:21:31.928426   24312 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 17:21:31.960103   24312 cri.go:89] found id: "30379cd07f0e2a367ed0582dd79014c45ad5c6333a47d04579074d0bb755ecda"
	I1018 17:21:31.960115   24312 cri.go:89] found id: "0f142d20739cf269db8cdd288879b10ce46eb4e359b9c83fc592d617c6bb276c"
	I1018 17:21:31.960122   24312 cri.go:89] found id: "2a8ded35b5e3e6eb66afd0cc534f74b7898b868999d5eaef36dd59e137dc226e"
	I1018 17:21:31.960134   24312 cri.go:89] found id: "692182a4208b76673fa5832445728142b4439f36c8a3cfabfa83ca523f12eff9"
	I1018 17:21:31.960136   24312 cri.go:89] found id: "6d982be55a3fb4becc44a96333efe45cf25ef5dfa2ee1690529fe62988804eaa"
	I1018 17:21:31.960139   24312 cri.go:89] found id: "0bd3b67ad9f1a557cb80b58eb40e6835ec2fe7b197a64f190ad0ec3fe541654f"
	I1018 17:21:31.960141   24312 cri.go:89] found id: "acef8ceccf442e4753d77ab52b88c88b6b4ffe6f252c2c197a6b17fc26e53f32"
	I1018 17:21:31.960143   24312 cri.go:89] found id: "bf3ae197ca4172d03e7c65fd5e4cb2895a8fc7a36ad847f40c17ae8b2f777751"
	I1018 17:21:31.960145   24312 cri.go:89] found id: ""
	I1018 17:21:31.960196   24312 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 17:21:31.971525   24312 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T17:21:31Z" level=error msg="open /run/runc: no such file or directory"
	I1018 17:21:31.971591   24312 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 17:21:31.979716   24312 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 17:21:31.979724   24312 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 17:21:31.979779   24312 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 17:21:31.987519   24312 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 17:21:31.988022   24312 kubeconfig.go:125] found "functional-306136" server: "https://192.168.49.2:8441"
	I1018 17:21:31.989276   24312 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 17:21:31.997272   24312 kubeadm.go:644] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-18 17:19:26.317939377 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-18 17:21:31.111704914 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
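The drift check above hinges on `diff -u` exiting 1 when the two renderings differ; in this run only the enable-admission-plugins value changed, which is enough to trigger a full reconfiguration of the control plane. A sketch of that comparison under the same assumptions (paths from the log; the helper name is illustrative):

package main

import (
	"fmt"
	"os/exec"
)

// kubeadmConfigDrift runs `diff -u` on the current and freshly rendered
// kubeadm configs. diff exits 0 when the files match and 1 when they
// differ, so exit code 1 plus output means "drift detected", which is
// what triggers the reconfiguration seen in the log.
func kubeadmConfigDrift(current, next string) (string, bool, error) {
	out, err := exec.Command("sudo", "diff", "-u", current, next).CombinedOutput()
	if err == nil {
		return "", false, nil // identical
	}
	if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 1 {
		return string(out), true, nil // files differ
	}
	return "", false, err // diff itself failed (missing file, etc.)
}

func main() {
	diff, drifted, err := kubeadmConfigDrift(
		"/var/tmp/minikube/kubeadm.yaml",
		"/var/tmp/minikube/kubeadm.yaml.new",
	)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	if drifted {
		fmt.Println("detected kubeadm config drift:\n" + diff)
	}
}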
	I1018 17:21:31.997281   24312 kubeadm.go:1160] stopping kube-system containers ...
	I1018 17:21:31.997292   24312 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1018 17:21:31.997349   24312 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 17:21:32.027880   24312 cri.go:89] found id: "30379cd07f0e2a367ed0582dd79014c45ad5c6333a47d04579074d0bb755ecda"
	I1018 17:21:32.027891   24312 cri.go:89] found id: "0f142d20739cf269db8cdd288879b10ce46eb4e359b9c83fc592d617c6bb276c"
	I1018 17:21:32.027895   24312 cri.go:89] found id: "2a8ded35b5e3e6eb66afd0cc534f74b7898b868999d5eaef36dd59e137dc226e"
	I1018 17:21:32.027898   24312 cri.go:89] found id: "692182a4208b76673fa5832445728142b4439f36c8a3cfabfa83ca523f12eff9"
	I1018 17:21:32.027901   24312 cri.go:89] found id: "6d982be55a3fb4becc44a96333efe45cf25ef5dfa2ee1690529fe62988804eaa"
	I1018 17:21:32.027904   24312 cri.go:89] found id: "0bd3b67ad9f1a557cb80b58eb40e6835ec2fe7b197a64f190ad0ec3fe541654f"
	I1018 17:21:32.027907   24312 cri.go:89] found id: "acef8ceccf442e4753d77ab52b88c88b6b4ffe6f252c2c197a6b17fc26e53f32"
	I1018 17:21:32.027909   24312 cri.go:89] found id: "bf3ae197ca4172d03e7c65fd5e4cb2895a8fc7a36ad847f40c17ae8b2f777751"
	I1018 17:21:32.027912   24312 cri.go:89] found id: ""
	I1018 17:21:32.027922   24312 cri.go:252] Stopping containers: [30379cd07f0e2a367ed0582dd79014c45ad5c6333a47d04579074d0bb755ecda 0f142d20739cf269db8cdd288879b10ce46eb4e359b9c83fc592d617c6bb276c 2a8ded35b5e3e6eb66afd0cc534f74b7898b868999d5eaef36dd59e137dc226e 692182a4208b76673fa5832445728142b4439f36c8a3cfabfa83ca523f12eff9 6d982be55a3fb4becc44a96333efe45cf25ef5dfa2ee1690529fe62988804eaa 0bd3b67ad9f1a557cb80b58eb40e6835ec2fe7b197a64f190ad0ec3fe541654f acef8ceccf442e4753d77ab52b88c88b6b4ffe6f252c2c197a6b17fc26e53f32 bf3ae197ca4172d03e7c65fd5e4cb2895a8fc7a36ad847f40c17ae8b2f777751]
	I1018 17:21:32.027982   24312 ssh_runner.go:195] Run: which crictl
	I1018 17:21:32.031965   24312 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 30379cd07f0e2a367ed0582dd79014c45ad5c6333a47d04579074d0bb755ecda 0f142d20739cf269db8cdd288879b10ce46eb4e359b9c83fc592d617c6bb276c 2a8ded35b5e3e6eb66afd0cc534f74b7898b868999d5eaef36dd59e137dc226e 692182a4208b76673fa5832445728142b4439f36c8a3cfabfa83ca523f12eff9 6d982be55a3fb4becc44a96333efe45cf25ef5dfa2ee1690529fe62988804eaa 0bd3b67ad9f1a557cb80b58eb40e6835ec2fe7b197a64f190ad0ec3fe541654f acef8ceccf442e4753d77ab52b88c88b6b4ffe6f252c2c197a6b17fc26e53f32 bf3ae197ca4172d03e7c65fd5e4cb2895a8fc7a36ad847f40c17ae8b2f777751
	I1018 17:21:32.096118   24312 ssh_runner.go:195] Run: sudo systemctl stop kubelet
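Before re-running the kubeadm phases, every kube-system container is stopped through crictl (the binary resolved by `which crictl` above) and the kubelet is shut down so it cannot restart them mid-reconfiguration. A minimal sketch of that sequence, using two of the container IDs listed above as examples:

package main

import (
	"fmt"
	"os/exec"
)

// stopKubeSystem mirrors the sequence in the log: stop the given
// kube-system containers through crictl with a 10s grace period, then
// stop the kubelet so it does not restart them during reconfiguration.
func stopKubeSystem(crictlPath string, ids []string) error {
	args := append([]string{crictlPath, "stop", "--timeout=10"}, ids...)
	if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
		return fmt.Errorf("crictl stop: %w: %s", err, out)
	}
	if out, err := exec.Command("sudo", "systemctl", "stop", "kubelet").CombinedOutput(); err != nil {
		return fmt.Errorf("stop kubelet: %w: %s", err, out)
	}
	return nil
}

func main() {
	// Container IDs come from the crictl listing shown earlier in the log.
	ids := []string{
		"30379cd07f0e2a367ed0582dd79014c45ad5c6333a47d04579074d0bb755ecda",
		"0f142d20739cf269db8cdd288879b10ce46eb4e359b9c83fc592d617c6bb276c",
	}
	if err := stopKubeSystem("/usr/local/bin/crictl", ids); err != nil {
		fmt.Println(err)
	}
}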
	I1018 17:21:32.212221   24312 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 17:21:32.220472   24312 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5631 Oct 18 17:19 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Oct 18 17:19 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1972 Oct 18 17:19 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Oct 18 17:19 /etc/kubernetes/scheduler.conf
	
	I1018 17:21:32.220533   24312 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1018 17:21:32.229009   24312 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1018 17:21:32.237045   24312 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1018 17:21:32.237101   24312 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 17:21:32.244809   24312 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1018 17:21:32.253066   24312 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1018 17:21:32.253132   24312 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 17:21:32.260859   24312 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1018 17:21:32.268609   24312 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1018 17:21:32.268662   24312 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1018 17:21:32.276379   24312 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 17:21:32.284347   24312 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1018 17:21:32.333490   24312 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1018 17:21:34.746960   24312 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.413446377s)
	I1018 17:21:34.747020   24312 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1018 17:21:34.981530   24312 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1018 17:21:35.058447   24312 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1018 17:21:35.135324   24312 api_server.go:52] waiting for apiserver process to appear ...
	I1018 17:21:35.135402   24312 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:21:35.635505   24312 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:21:36.136389   24312 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:21:36.153911   24312 api_server.go:72] duration metric: took 1.018597058s to wait for apiserver process to appear ...
	I1018 17:21:36.153925   24312 api_server.go:88] waiting for apiserver healthz status ...
	I1018 17:21:36.153942   24312 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1018 17:21:38.534619   24312 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1018 17:21:38.534634   24312 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1018 17:21:38.534647   24312 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1018 17:21:38.621913   24312 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1018 17:21:38.621929   24312 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1018 17:21:38.654103   24312 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1018 17:21:38.679267   24312 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1018 17:21:38.679281   24312 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1018 17:21:39.154691   24312 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1018 17:21:39.172666   24312 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 17:21:39.172684   24312 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 17:21:39.654016   24312 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1018 17:21:39.672640   24312 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 17:21:39.672658   24312 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 17:21:40.154052   24312 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1018 17:21:40.162867   24312 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1018 17:21:40.177059   24312 api_server.go:141] control plane version: v1.34.1
	I1018 17:21:40.177075   24312 api_server.go:131] duration metric: took 4.023145473s to wait for apiserver health ...
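The /healthz probe above is retried roughly every half second: the early 403s come from the anonymous probe hitting an apiserver whose RBAC bootstrap roles are not yet in place, and the 500s report poststart hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) that are still running. A minimal Go sketch of that wait loop, assuming the endpoint from the log; minikube's own client pins the cluster CA instead of skipping TLS verification:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns
// 200. Early 403 and 500 responses are expected while the apiserver
// finishes bootstrapping and are simply retried.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Sketch only: skip verification of the apiserver's serving cert.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.49.2:8441/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}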
	I1018 17:21:40.177083   24312 cni.go:84] Creating CNI manager for ""
	I1018 17:21:40.177091   24312 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 17:21:40.180721   24312 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1018 17:21:40.183632   24312 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 17:21:40.188141   24312 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1018 17:21:40.188151   24312 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 17:21:40.209171   24312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
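With the apiserver healthy, the rendered kindnet manifest (2601 bytes in this run) is copied to the node and applied with the bundled kubectl against the local kubeconfig. A sketch of that step under the same assumptions (paths and kubectl location from the log), writing the manifest locally instead of copying it over SSH:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyCNIManifest writes the rendered kindnet manifest to the path used
// in the log and applies it with the kubelet's kubeconfig, which is what
// the "Configuring CNI" step above does once /opt/cni/bin/portmap exists.
func applyCNIManifest(manifest []byte) error {
	const path = "/var/tmp/minikube/cni.yaml"
	if err := os.WriteFile(path, manifest, 0o644); err != nil {
		return err
	}
	cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.34.1/kubectl",
		"apply", "--kubeconfig=/var/lib/minikube/kubeconfig", "-f", path)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("kubectl apply: %w: %s", err, out)
	}
	return nil
}

func main() {
	// Assumes a rendered kindnet manifest saved locally as cni.yaml.
	manifest, err := os.ReadFile("cni.yaml")
	if err != nil {
		fmt.Println(err)
		return
	}
	if err := applyCNIManifest(manifest); err != nil {
		fmt.Println(err)
	}
}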
	I1018 17:21:40.779959   24312 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 17:21:40.784132   24312 system_pods.go:59] 8 kube-system pods found
	I1018 17:21:40.784166   24312 system_pods.go:61] "coredns-66bc5c9577-s28sz" [1443df00-ddc5-405a-a614-04d9dae28e1e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 17:21:40.784179   24312 system_pods.go:61] "etcd-functional-306136" [80d9e19b-eea8-4489-be5d-9fde983c3df9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 17:21:40.784184   24312 system_pods.go:61] "kindnet-5lszn" [76dbca33-ff96-4834-b859-c54148dc6ed6] Running
	I1018 17:21:40.784191   24312 system_pods.go:61] "kube-apiserver-functional-306136" [5e4163c5-f999-4ff0-a15f-72cd85fcd8d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 17:21:40.784197   24312 system_pods.go:61] "kube-controller-manager-functional-306136" [9528c7c6-56f7-472e-9fce-4107a05b7a09] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 17:21:40.784201   24312 system_pods.go:61] "kube-proxy-vz6kt" [055e7d48-c11a-4cf7-bdca-dd22a5e72dd4] Running
	I1018 17:21:40.784207   24312 system_pods.go:61] "kube-scheduler-functional-306136" [08cd22ca-a4b0-4d7b-b06f-27e3a5f32b17] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 17:21:40.784210   24312 system_pods.go:61] "storage-provisioner" [f921b3a5-63a0-4ac8-b575-51756da2bc07] Running
	I1018 17:21:40.784215   24312 system_pods.go:74] duration metric: took 4.246075ms to wait for pod list to return data ...
	I1018 17:21:40.784221   24312 node_conditions.go:102] verifying NodePressure condition ...
	I1018 17:21:40.786987   24312 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 17:21:40.787006   24312 node_conditions.go:123] node cpu capacity is 2
	I1018 17:21:40.787016   24312 node_conditions.go:105] duration metric: took 2.7919ms to run NodePressure ...
	I1018 17:21:40.787084   24312 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1018 17:21:41.052131   24312 kubeadm.go:728] waiting for restarted kubelet to initialise ...
	I1018 17:21:41.061598   24312 kubeadm.go:743] kubelet initialised
	I1018 17:21:41.061608   24312 kubeadm.go:744] duration metric: took 9.464955ms waiting for restarted kubelet to initialise ...
	I1018 17:21:41.061639   24312 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 17:21:41.071603   24312 ops.go:34] apiserver oom_adj: -16
	I1018 17:21:41.071631   24312 kubeadm.go:601] duration metric: took 9.091901825s to restartPrimaryControlPlane
	I1018 17:21:41.071639   24312 kubeadm.go:402] duration metric: took 9.143367063s to StartCluster
	I1018 17:21:41.071654   24312 settings.go:142] acquiring lock: {Name:mk3a3fd093bc95e20cc1842611fedcbe4a79e692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:21:41.071741   24312 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-2509/kubeconfig
	I1018 17:21:41.072470   24312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/kubeconfig: {Name:mk229d7d0b6575771899e0ce8a346bbddf5cf86b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:21:41.072930   24312 config.go:182] Loaded profile config "functional-306136": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:21:41.072743   24312 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 17:21:41.073100   24312 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 17:21:41.073161   24312 addons.go:69] Setting storage-provisioner=true in profile "functional-306136"
	I1018 17:21:41.073174   24312 addons.go:238] Setting addon storage-provisioner=true in "functional-306136"
	W1018 17:21:41.073178   24312 addons.go:247] addon storage-provisioner should already be in state true
	I1018 17:21:41.073206   24312 host.go:66] Checking if "functional-306136" exists ...
	I1018 17:21:41.073642   24312 cli_runner.go:164] Run: docker container inspect functional-306136 --format={{.State.Status}}
	I1018 17:21:41.074124   24312 addons.go:69] Setting default-storageclass=true in profile "functional-306136"
	I1018 17:21:41.074138   24312 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-306136"
	I1018 17:21:41.074413   24312 cli_runner.go:164] Run: docker container inspect functional-306136 --format={{.State.Status}}
	I1018 17:21:41.077080   24312 out.go:179] * Verifying Kubernetes components...
	I1018 17:21:41.080196   24312 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 17:21:41.109560   24312 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 17:21:41.110231   24312 addons.go:238] Setting addon default-storageclass=true in "functional-306136"
	W1018 17:21:41.110240   24312 addons.go:247] addon default-storageclass should already be in state true
	I1018 17:21:41.110261   24312 host.go:66] Checking if "functional-306136" exists ...
	I1018 17:21:41.110661   24312 cli_runner.go:164] Run: docker container inspect functional-306136 --format={{.State.Status}}
	I1018 17:21:41.113933   24312 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 17:21:41.113944   24312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 17:21:41.114005   24312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-306136
	I1018 17:21:41.145718   24312 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 17:21:41.145729   24312 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 17:21:41.145788   24312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-306136
	I1018 17:21:41.148580   24312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/functional-306136/id_rsa Username:docker}
	I1018 17:21:41.177816   24312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/functional-306136/id_rsa Username:docker}
	I1018 17:21:41.342881   24312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 17:21:41.368576   24312 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 17:21:41.374821   24312 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 17:21:42.530668   24312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.187763017s)
	I1018 17:21:42.530716   24312 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.162129497s)
	I1018 17:21:42.530734   24312 node_ready.go:35] waiting up to 6m0s for node "functional-306136" to be "Ready" ...
	I1018 17:21:42.530929   24312 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.156095467s)
	I1018 17:21:42.533850   24312 node_ready.go:49] node "functional-306136" is "Ready"
	I1018 17:21:42.533865   24312 node_ready.go:38] duration metric: took 3.122153ms for node "functional-306136" to be "Ready" ...
	I1018 17:21:42.533875   24312 api_server.go:52] waiting for apiserver process to appear ...
	I1018 17:21:42.533934   24312 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:21:42.541203   24312 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1018 17:21:42.544001   24312 addons.go:514] duration metric: took 1.470875959s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1018 17:21:42.548232   24312 api_server.go:72] duration metric: took 1.475184492s to wait for apiserver process to appear ...
	I1018 17:21:42.548244   24312 api_server.go:88] waiting for apiserver healthz status ...
	I1018 17:21:42.548262   24312 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1018 17:21:42.557636   24312 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1018 17:21:42.558842   24312 api_server.go:141] control plane version: v1.34.1
	I1018 17:21:42.558857   24312 api_server.go:131] duration metric: took 10.607699ms to wait for apiserver health ...
	I1018 17:21:42.558865   24312 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 17:21:42.562463   24312 system_pods.go:59] 8 kube-system pods found
	I1018 17:21:42.562480   24312 system_pods.go:61] "coredns-66bc5c9577-s28sz" [1443df00-ddc5-405a-a614-04d9dae28e1e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 17:21:42.562487   24312 system_pods.go:61] "etcd-functional-306136" [80d9e19b-eea8-4489-be5d-9fde983c3df9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 17:21:42.562492   24312 system_pods.go:61] "kindnet-5lszn" [76dbca33-ff96-4834-b859-c54148dc6ed6] Running
	I1018 17:21:42.562499   24312 system_pods.go:61] "kube-apiserver-functional-306136" [5e4163c5-f999-4ff0-a15f-72cd85fcd8d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 17:21:42.562505   24312 system_pods.go:61] "kube-controller-manager-functional-306136" [9528c7c6-56f7-472e-9fce-4107a05b7a09] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 17:21:42.562508   24312 system_pods.go:61] "kube-proxy-vz6kt" [055e7d48-c11a-4cf7-bdca-dd22a5e72dd4] Running
	I1018 17:21:42.562514   24312 system_pods.go:61] "kube-scheduler-functional-306136" [08cd22ca-a4b0-4d7b-b06f-27e3a5f32b17] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 17:21:42.562517   24312 system_pods.go:61] "storage-provisioner" [f921b3a5-63a0-4ac8-b575-51756da2bc07] Running
	I1018 17:21:42.562522   24312 system_pods.go:74] duration metric: took 3.652679ms to wait for pod list to return data ...
	I1018 17:21:42.562529   24312 default_sa.go:34] waiting for default service account to be created ...
	I1018 17:21:42.565161   24312 default_sa.go:45] found service account: "default"
	I1018 17:21:42.565173   24312 default_sa.go:55] duration metric: took 2.639742ms for default service account to be created ...
	I1018 17:21:42.565181   24312 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 17:21:42.568233   24312 system_pods.go:86] 8 kube-system pods found
	I1018 17:21:42.568252   24312 system_pods.go:89] "coredns-66bc5c9577-s28sz" [1443df00-ddc5-405a-a614-04d9dae28e1e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 17:21:42.568260   24312 system_pods.go:89] "etcd-functional-306136" [80d9e19b-eea8-4489-be5d-9fde983c3df9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 17:21:42.568273   24312 system_pods.go:89] "kindnet-5lszn" [76dbca33-ff96-4834-b859-c54148dc6ed6] Running
	I1018 17:21:42.568279   24312 system_pods.go:89] "kube-apiserver-functional-306136" [5e4163c5-f999-4ff0-a15f-72cd85fcd8d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 17:21:42.568284   24312 system_pods.go:89] "kube-controller-manager-functional-306136" [9528c7c6-56f7-472e-9fce-4107a05b7a09] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 17:21:42.568287   24312 system_pods.go:89] "kube-proxy-vz6kt" [055e7d48-c11a-4cf7-bdca-dd22a5e72dd4] Running
	I1018 17:21:42.568295   24312 system_pods.go:89] "kube-scheduler-functional-306136" [08cd22ca-a4b0-4d7b-b06f-27e3a5f32b17] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 17:21:42.568299   24312 system_pods.go:89] "storage-provisioner" [f921b3a5-63a0-4ac8-b575-51756da2bc07] Running
	I1018 17:21:42.568305   24312 system_pods.go:126] duration metric: took 3.119405ms to wait for k8s-apps to be running ...
	I1018 17:21:42.568312   24312 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 17:21:42.568382   24312 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 17:21:42.581890   24312 system_svc.go:56] duration metric: took 13.567298ms WaitForService to wait for kubelet
	I1018 17:21:42.581908   24312 kubeadm.go:586] duration metric: took 1.508864493s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 17:21:42.581926   24312 node_conditions.go:102] verifying NodePressure condition ...
	I1018 17:21:42.585784   24312 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 17:21:42.585799   24312 node_conditions.go:123] node cpu capacity is 2
	I1018 17:21:42.585809   24312 node_conditions.go:105] duration metric: took 3.877962ms to run NodePressure ...
	I1018 17:21:42.585820   24312 start.go:241] waiting for startup goroutines ...
	I1018 17:21:42.585826   24312 start.go:246] waiting for cluster config update ...
	I1018 17:21:42.585836   24312 start.go:255] writing updated cluster config ...
	I1018 17:21:42.586146   24312 ssh_runner.go:195] Run: rm -f paused
	I1018 17:21:42.589778   24312 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 17:21:42.594182   24312 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-s28sz" in "kube-system" namespace to be "Ready" or be gone ...
	W1018 17:21:44.600800   24312 pod_ready.go:104] pod "coredns-66bc5c9577-s28sz" is not "Ready", error: <nil>
	I1018 17:21:46.109547   24312 pod_ready.go:94] pod "coredns-66bc5c9577-s28sz" is "Ready"
	I1018 17:21:46.109560   24312 pod_ready.go:86] duration metric: took 3.515365953s for pod "coredns-66bc5c9577-s28sz" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:21:46.116957   24312 pod_ready.go:83] waiting for pod "etcd-functional-306136" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:21:46.623404   24312 pod_ready.go:94] pod "etcd-functional-306136" is "Ready"
	I1018 17:21:46.623418   24312 pod_ready.go:86] duration metric: took 506.449399ms for pod "etcd-functional-306136" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:21:46.625840   24312 pod_ready.go:83] waiting for pod "kube-apiserver-functional-306136" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:21:46.630617   24312 pod_ready.go:94] pod "kube-apiserver-functional-306136" is "Ready"
	I1018 17:21:46.630631   24312 pod_ready.go:86] duration metric: took 4.777807ms for pod "kube-apiserver-functional-306136" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:21:46.633084   24312 pod_ready.go:83] waiting for pod "kube-controller-manager-functional-306136" in "kube-system" namespace to be "Ready" or be gone ...
	W1018 17:21:48.639369   24312 pod_ready.go:104] pod "kube-controller-manager-functional-306136" is not "Ready", error: <nil>
	W1018 17:21:50.642848   24312 pod_ready.go:104] pod "kube-controller-manager-functional-306136" is not "Ready", error: <nil>
	I1018 17:21:53.140726   24312 pod_ready.go:94] pod "kube-controller-manager-functional-306136" is "Ready"
	I1018 17:21:53.140739   24312 pod_ready.go:86] duration metric: took 6.507643093s for pod "kube-controller-manager-functional-306136" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:21:53.143119   24312 pod_ready.go:83] waiting for pod "kube-proxy-vz6kt" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:21:53.147687   24312 pod_ready.go:94] pod "kube-proxy-vz6kt" is "Ready"
	I1018 17:21:53.147702   24312 pod_ready.go:86] duration metric: took 4.571331ms for pod "kube-proxy-vz6kt" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:21:53.149915   24312 pod_ready.go:83] waiting for pod "kube-scheduler-functional-306136" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:21:53.154305   24312 pod_ready.go:94] pod "kube-scheduler-functional-306136" is "Ready"
	I1018 17:21:53.154318   24312 pod_ready.go:86] duration metric: took 4.391119ms for pod "kube-scheduler-functional-306136" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:21:53.154328   24312 pod_ready.go:40] duration metric: took 10.564530318s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 17:21:53.209173   24312 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 17:21:53.212286   24312 out.go:179] * Done! kubectl is now configured to use "functional-306136" cluster and "default" namespace by default
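The final wait polls each control-plane pod by label until it reports Ready (the kube-controller-manager pod took the longest here, about 6.5s). minikube's pod_ready.go talks to the API directly; the sketch below gets an equivalent result by shelling out to `kubectl wait` with the label selectors listed in the log:

package main

import (
	"fmt"
	"os/exec"
)

// waitKubeSystemReady asks kubectl to wait for each control-plane
// component label to report Ready, approximating the "extra waiting"
// step in the log above.
func waitKubeSystemReady(kubeconfig string) error {
	selectors := []string{
		"k8s-app=kube-dns",
		"component=etcd",
		"component=kube-apiserver",
		"component=kube-controller-manager",
		"k8s-app=kube-proxy",
		"component=kube-scheduler",
	}
	for _, sel := range selectors {
		cmd := exec.Command("kubectl", "--kubeconfig", kubeconfig,
			"-n", "kube-system", "wait", "--for=condition=Ready",
			"pod", "-l", sel, "--timeout=4m")
		if out, err := cmd.CombinedOutput(); err != nil {
			return fmt.Errorf("pods with %q not ready: %w: %s", sel, err, out)
		}
	}
	return nil
}

func main() {
	if err := waitKubeSystemReady("/home/jenkins/minikube-integration/21409-2509/kubeconfig"); err != nil {
		fmt.Println(err)
	}
}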
	
	
	==> CRI-O <==
	Oct 18 17:22:34 functional-306136 crio[3717]: time="2025-10-18T17:22:34.186323572Z" level=info msg="Got pod network &{Name:hello-node-75c85bcc94-hd288 Namespace:default ID:4ecb7a35216ca1a1a50025c53cc52cff8d4436b9ae7880a2c28e94037382e353 UID:f5efbc5c-c932-4007-be3f-89964ee5ac25 NetNS:/var/run/netns/0c3c1ea0-4441-42d4-a20e-7ffd8fcec75b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000079d80}] Aliases:map[]}"
	Oct 18 17:22:34 functional-306136 crio[3717]: time="2025-10-18T17:22:34.186510619Z" level=info msg="Checking pod default_hello-node-75c85bcc94-hd288 for CNI network kindnet (type=ptp)"
	Oct 18 17:22:34 functional-306136 crio[3717]: time="2025-10-18T17:22:34.189614482Z" level=info msg="Ran pod sandbox 4ecb7a35216ca1a1a50025c53cc52cff8d4436b9ae7880a2c28e94037382e353 with infra container: default/hello-node-75c85bcc94-hd288/POD" id=87783db0-7ec0-4f62-8ee7-e51fe62e6af4 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 17:22:34 functional-306136 crio[3717]: time="2025-10-18T17:22:34.193615269Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=f54ff755-d905-4173-8f1d-d5cea9d91e7c name=/runtime.v1.ImageService/PullImage
	Oct 18 17:22:35 functional-306136 crio[3717]: time="2025-10-18T17:22:35.107188123Z" level=info msg="Stopping pod sandbox: 476589b600277054855d69aaec336e0f0f22cd37fb4451edbde98d8c1a9cb8aa" id=d5d44e1a-7996-4083-8060-62f0b1566a05 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 18 17:22:35 functional-306136 crio[3717]: time="2025-10-18T17:22:35.107255743Z" level=info msg="Stopped pod sandbox (already stopped): 476589b600277054855d69aaec336e0f0f22cd37fb4451edbde98d8c1a9cb8aa" id=d5d44e1a-7996-4083-8060-62f0b1566a05 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 18 17:22:35 functional-306136 crio[3717]: time="2025-10-18T17:22:35.109512029Z" level=info msg="Removing pod sandbox: 476589b600277054855d69aaec336e0f0f22cd37fb4451edbde98d8c1a9cb8aa" id=7bf5a06d-16d2-41f2-8071-c53fbc4675a9 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 18 17:22:35 functional-306136 crio[3717]: time="2025-10-18T17:22:35.113395071Z" level=info msg="Removed pod sandbox: 476589b600277054855d69aaec336e0f0f22cd37fb4451edbde98d8c1a9cb8aa" id=7bf5a06d-16d2-41f2-8071-c53fbc4675a9 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 18 17:22:35 functional-306136 crio[3717]: time="2025-10-18T17:22:35.114019432Z" level=info msg="Stopping pod sandbox: efe0458607481d520768bbe67f6dd41b029dd25a2bb5ce42cb588348675443ba" id=5c4b60e1-9363-48d1-849e-c16d91b6e69b name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 18 17:22:35 functional-306136 crio[3717]: time="2025-10-18T17:22:35.114073275Z" level=info msg="Stopped pod sandbox (already stopped): efe0458607481d520768bbe67f6dd41b029dd25a2bb5ce42cb588348675443ba" id=5c4b60e1-9363-48d1-849e-c16d91b6e69b name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 18 17:22:35 functional-306136 crio[3717]: time="2025-10-18T17:22:35.114774504Z" level=info msg="Removing pod sandbox: efe0458607481d520768bbe67f6dd41b029dd25a2bb5ce42cb588348675443ba" id=6cb92f32-c613-4f59-bde4-61c377c49387 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 18 17:22:35 functional-306136 crio[3717]: time="2025-10-18T17:22:35.12208047Z" level=info msg="Removed pod sandbox: efe0458607481d520768bbe67f6dd41b029dd25a2bb5ce42cb588348675443ba" id=6cb92f32-c613-4f59-bde4-61c377c49387 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 18 17:22:35 functional-306136 crio[3717]: time="2025-10-18T17:22:35.125951113Z" level=info msg="Stopping pod sandbox: 41613d1b2bad03333112413d2ba605790597438e1045bb2c89304f15e01140e4" id=c5f76f94-41ae-48f7-9dfe-84cafd7413e2 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 18 17:22:35 functional-306136 crio[3717]: time="2025-10-18T17:22:35.126043685Z" level=info msg="Stopped pod sandbox (already stopped): 41613d1b2bad03333112413d2ba605790597438e1045bb2c89304f15e01140e4" id=c5f76f94-41ae-48f7-9dfe-84cafd7413e2 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 18 17:22:35 functional-306136 crio[3717]: time="2025-10-18T17:22:35.126457172Z" level=info msg="Removing pod sandbox: 41613d1b2bad03333112413d2ba605790597438e1045bb2c89304f15e01140e4" id=9cca0c1e-79b6-429d-8434-0b08db2b302b name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 18 17:22:35 functional-306136 crio[3717]: time="2025-10-18T17:22:35.12995323Z" level=info msg="Removed pod sandbox: 41613d1b2bad03333112413d2ba605790597438e1045bb2c89304f15e01140e4" id=9cca0c1e-79b6-429d-8434-0b08db2b302b name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 18 17:22:45 functional-306136 crio[3717]: time="2025-10-18T17:22:45.139793998Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=67af30f6-11cc-4a81-a1c4-d0487fb0d1fe name=/runtime.v1.ImageService/PullImage
	Oct 18 17:22:57 functional-306136 crio[3717]: time="2025-10-18T17:22:57.137952636Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=0f9a98d5-a08b-4f0f-aeea-2ebd4d49e33e name=/runtime.v1.ImageService/PullImage
	Oct 18 17:23:09 functional-306136 crio[3717]: time="2025-10-18T17:23:09.138015567Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=c88ba3f1-c81c-4d86-b5c1-938a750b98f2 name=/runtime.v1.ImageService/PullImage
	Oct 18 17:23:50 functional-306136 crio[3717]: time="2025-10-18T17:23:50.138649754Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=a5fc97a8-bec8-4f68-8740-dab88bccab93 name=/runtime.v1.ImageService/PullImage
	Oct 18 17:23:51 functional-306136 crio[3717]: time="2025-10-18T17:23:51.138277808Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=c57e0148-eef6-4489-972b-7cfac2561349 name=/runtime.v1.ImageService/PullImage
	Oct 18 17:25:19 functional-306136 crio[3717]: time="2025-10-18T17:25:19.138417823Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=840b2fcb-b893-459b-b28f-2b89609f2f43 name=/runtime.v1.ImageService/PullImage
	Oct 18 17:25:23 functional-306136 crio[3717]: time="2025-10-18T17:25:23.13918632Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=2dd2b9a5-740f-419d-a5f5-63e7ababd81c name=/runtime.v1.ImageService/PullImage
	Oct 18 17:28:05 functional-306136 crio[3717]: time="2025-10-18T17:28:05.139393556Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=94f5d9a8-267f-445f-b941-5817939002a9 name=/runtime.v1.ImageService/PullImage
	Oct 18 17:28:17 functional-306136 crio[3717]: time="2025-10-18T17:28:17.138688064Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=0e3227bb-b57f-485a-9d0f-1111a4af81b9 name=/runtime.v1.ImageService/PullImage
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                             CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	275c12f9e5518       docker.io/library/nginx@sha256:ac03974aaaeb5e3fbe2ab74d7f2badf1388596f6877cbacf78af3617addbba9a   9 minutes ago       Running             myfrontend                0                   35e9067ec23a9       sp-pod                                      default
	a07fc0ba51c82       docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0   10 minutes ago      Running             nginx                     0                   dc0121fce0e9e       nginx-svc                                   default
	bc0645d36d862       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  10 minutes ago      Running             coredns                   3                   ce6989bc6676e       coredns-66bc5c9577-s28sz                    kube-system
	6377853198c6d       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  10 minutes ago      Running             storage-provisioner       3                   8a44b73339f3c       storage-provisioner                         kube-system
	e267a0c370545       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                  10 minutes ago      Running             kube-proxy                3                   96616199fab74       kube-proxy-vz6kt                            kube-system
	267a5d42ee8ec       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  10 minutes ago      Running             kindnet-cni               3                   53f6721954e83       kindnet-5lszn                               kube-system
	9a2fa09848cd1       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                  10 minutes ago      Running             kube-apiserver            0                   7ac6c0c7077a0       kube-apiserver-functional-306136            kube-system
	35af8a99c93ef       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                  10 minutes ago      Running             kube-controller-manager   3                   6c53b411e759e       kube-controller-manager-functional-306136   kube-system
	e06ca0f8997f2       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                  10 minutes ago      Running             kube-scheduler            3                   cd93a457fe1c7       kube-scheduler-functional-306136            kube-system
	ee539505abbec       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  10 minutes ago      Running             etcd                      3                   bd8169423a0ea       etcd-functional-306136                      kube-system
	30379cd07f0e2       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  11 minutes ago      Exited              coredns                   2                   ce6989bc6676e       coredns-66bc5c9577-s28sz                    kube-system
	0f142d20739cf       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  11 minutes ago      Exited              storage-provisioner       2                   8a44b73339f3c       storage-provisioner                         kube-system
	2a8ded35b5e3e       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                  11 minutes ago      Exited              kube-scheduler            2                   cd93a457fe1c7       kube-scheduler-functional-306136            kube-system
	6d982be55a3fb       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                  11 minutes ago      Exited              kube-controller-manager   2                   6c53b411e759e       kube-controller-manager-functional-306136   kube-system
	0bd3b67ad9f1a       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                  11 minutes ago      Exited              kube-proxy                2                   96616199fab74       kube-proxy-vz6kt                            kube-system
	acef8ceccf442       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  11 minutes ago      Exited              kindnet-cni               2                   53f6721954e83       kindnet-5lszn                               kube-system
	bf3ae197ca417       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  11 minutes ago      Exited              etcd                      2                   bd8169423a0ea       etcd-functional-306136                      kube-system
	
	
	==> coredns [30379cd07f0e2a367ed0582dd79014c45ad5c6333a47d04579074d0bb755ecda] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37328 - 12854 "HINFO IN 7920649976226157958.471063933017629692. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.040735214s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [bc0645d36d862714a3a3cf393eba88cafbc311d1e264a77f3c5925810d961fd6] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35999 - 23061 "HINFO IN 9166400379127503040.8333006921420215223. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022700597s
	
	
	==> describe nodes <==
	Name:               functional-306136
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-306136
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=functional-306136
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T17_19_43_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 17:19:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-306136
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 17:32:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 17:32:10 +0000   Sat, 18 Oct 2025 17:19:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 17:32:10 +0000   Sat, 18 Oct 2025 17:19:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 17:32:10 +0000   Sat, 18 Oct 2025 17:19:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 17:32:10 +0000   Sat, 18 Oct 2025 17:20:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-306136
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                4d1595ce-54ec-4259-b771-8819c31833c7
	  Boot ID:                    8a1d6305-2994-4aa4-a4cc-6b62966b9918
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-hd288                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m43s
	  default                     hello-node-connect-7d85dfc575-vcjkz          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m49s
	  kube-system                 coredns-66bc5c9577-s28sz                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-functional-306136                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-5lszn                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-306136             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-306136    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-vz6kt                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-306136             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node functional-306136 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node functional-306136 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x8 over 12m)  kubelet          Node functional-306136 status is now: NodeHasSufficientPID
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node functional-306136 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node functional-306136 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m                kubelet          Node functional-306136 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                node-controller  Node functional-306136 event: Registered Node functional-306136 in Controller
	  Normal   NodeReady                11m                kubelet          Node functional-306136 status is now: NodeReady
	  Warning  ContainerGCFailed        11m                kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           11m                node-controller  Node functional-306136 event: Registered Node functional-306136 in Controller
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-306136 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-306136 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-306136 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node functional-306136 event: Registered Node functional-306136 in Controller
	
	
	==> dmesg <==
	[Oct18 16:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014995] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.499206] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.035776] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.808632] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.418900] kauditd_printk_skb: 36 callbacks suppressed
	[Oct18 17:12] overlayfs: idmapped layers are currently not supported
	[  +0.082393] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Oct18 17:18] overlayfs: idmapped layers are currently not supported
	[Oct18 17:19] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [bf3ae197ca4172d03e7c65fd5e4cb2895a8fc7a36ad847f40c17ae8b2f777751] <==
	{"level":"warn","ts":"2025-10-18T17:20:57.089776Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T17:20:57.117261Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T17:20:57.161378Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T17:20:57.208551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T17:20:57.242564Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T17:20:57.271569Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T17:20:57.414170Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57832","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T17:21:23.872587Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-18T17:21:23.872648Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-306136","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-18T17:21:23.872763Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-18T17:21:24.013900Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-18T17:21:24.013974Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T17:21:24.014002Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-10-18T17:21:24.014084Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-18T17:21:24.014105Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-18T17:21:24.014155Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-18T17:21:24.014188Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-18T17:21:24.014196Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-18T17:21:24.014233Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-18T17:21:24.014247Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-18T17:21:24.014254Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T17:21:24.017801Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-18T17:21:24.017900Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T17:21:24.017932Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-18T17:21:24.017939Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-306136","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [ee539505abbec6280d9ef75313b57f42bd28da550961d828632d49a3eb0f09dd] <==
	{"level":"warn","ts":"2025-10-18T17:21:37.118218Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T17:21:37.139365Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T17:21:37.157480Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T17:21:37.171790Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T17:21:37.190896Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T17:21:37.204989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T17:21:37.231398Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T17:21:37.252855Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T17:21:37.277167Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T17:21:37.293824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T17:21:37.332874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T17:21:37.342230Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T17:21:37.377626Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T17:21:37.401056Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T17:21:37.417051Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T17:21:37.440698Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T17:21:37.453121Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T17:21:37.475118Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T17:21:37.521358Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T17:21:37.529265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T17:21:37.548413Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T17:21:37.621013Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48564","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T17:31:36.296289Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1159}
	{"level":"info","ts":"2025-10-18T17:31:36.320563Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1159,"took":"23.910896ms","hash":1282796478,"current-db-size-bytes":3317760,"current-db-size":"3.3 MB","current-db-size-in-use-bytes":1495040,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2025-10-18T17:31:36.320702Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1282796478,"revision":1159,"compact-revision":-1}
	
	
	==> kernel <==
	 17:32:16 up  1:14,  0 user,  load average: 0.55, 0.44, 0.52
	Linux functional-306136 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [267a5d42ee8ec9583e6e7d3e82c29345dae0a87cd34427e3de0c7aa2f31c2da5] <==
	I1018 17:30:09.817403       1 main.go:301] handling current node
	I1018 17:30:19.810411       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 17:30:19.810443       1 main.go:301] handling current node
	I1018 17:30:29.810268       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 17:30:29.810309       1 main.go:301] handling current node
	I1018 17:30:39.810424       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 17:30:39.810533       1 main.go:301] handling current node
	I1018 17:30:49.809990       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 17:30:49.810025       1 main.go:301] handling current node
	I1018 17:30:59.810429       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 17:30:59.810461       1 main.go:301] handling current node
	I1018 17:31:09.817037       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 17:31:09.817072       1 main.go:301] handling current node
	I1018 17:31:19.810410       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 17:31:19.810444       1 main.go:301] handling current node
	I1018 17:31:29.809786       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 17:31:29.809822       1 main.go:301] handling current node
	I1018 17:31:39.817947       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 17:31:39.818058       1 main.go:301] handling current node
	I1018 17:31:49.810350       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 17:31:49.810395       1 main.go:301] handling current node
	I1018 17:31:59.810435       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 17:31:59.810563       1 main.go:301] handling current node
	I1018 17:32:09.817742       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 17:32:09.817920       1 main.go:301] handling current node
	
	
	==> kindnet [acef8ceccf442e4753d77ab52b88c88b6b4ffe6f252c2c197a6b17fc26e53f32] <==
	I1018 17:20:53.625067       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 17:20:53.625461       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1018 17:20:53.625606       1 main.go:148] setting mtu 1500 for CNI 
	I1018 17:20:53.625619       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 17:20:53.625634       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T17:20:53Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 17:20:53.906725       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 17:20:53.906804       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 17:20:53.906837       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 17:20:53.909398       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 17:20:58.912813       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 17:20:58.912890       1 metrics.go:72] Registering metrics
	I1018 17:20:58.912985       1 controller.go:711] "Syncing nftables rules"
	I1018 17:21:03.909090       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 17:21:03.909234       1 main.go:301] handling current node
	I1018 17:21:13.906149       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 17:21:13.906183       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9a2fa09848cd136310df4801e014650144cb67d3df680f314ffe11ce3ad512dc] <==
	I1018 17:21:38.781959       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1018 17:21:38.790368       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1018 17:21:38.797212       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1018 17:21:38.801481       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1018 17:21:38.806360       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1018 17:21:38.806437       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1018 17:21:38.806994       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1018 17:21:38.808011       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1018 17:21:38.812321       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 17:21:39.167096       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 17:21:39.387852       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 17:21:40.770582       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1018 17:21:40.935452       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 17:21:41.017615       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 17:21:41.029090       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 17:21:41.960580       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 17:21:42.186579       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 17:21:42.232507       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 17:21:56.552254       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.100.209.216"}
	I1018 17:22:04.082703       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.111.225.241"}
	I1018 17:22:14.637949       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.109.225.195"}
	E1018 17:22:26.986867       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:44996: use of closed network connection
	E1018 17:22:27.445607       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I1018 17:22:33.978018       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.98.47.93"}
	I1018 17:31:38.704839       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [35af8a99c93ef10d2b89027d4639f407fca9ac581d61a19058df428217cb7ad7] <==
	I1018 17:21:41.916625       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 17:21:41.916642       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 17:21:41.923546       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1018 17:21:41.923631       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 17:21:41.923703       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-306136"
	I1018 17:21:41.923747       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1018 17:21:41.926486       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1018 17:21:41.928414       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1018 17:21:41.928678       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1018 17:21:41.924387       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1018 17:21:41.931838       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1018 17:21:41.931888       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1018 17:21:41.932001       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 17:21:41.932015       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 17:21:41.932021       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 17:21:41.955156       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1018 17:21:41.955528       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1018 17:21:41.961265       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1018 17:21:41.963701       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1018 17:21:41.965352       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 17:21:41.965422       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 17:21:41.968929       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1018 17:21:41.974314       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 17:21:41.974388       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 17:21:41.989029       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-controller-manager [6d982be55a3fb4becc44a96333efe45cf25ef5dfa2ee1690529fe62988804eaa] <==
	I1018 17:21:02.152841       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1018 17:21:02.152931       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1018 17:21:02.153031       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 17:21:02.153052       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1018 17:21:02.153086       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1018 17:21:02.153166       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 17:21:02.153228       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-306136"
	I1018 17:21:02.153268       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1018 17:21:02.153866       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1018 17:21:02.156011       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1018 17:21:02.157535       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 17:21:02.161383       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1018 17:21:02.163106       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 17:21:02.163129       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 17:21:02.163137       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 17:21:02.165138       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 17:21:02.174556       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1018 17:21:02.178828       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1018 17:21:02.182069       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 17:21:02.185625       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 17:21:02.188835       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1018 17:21:02.192144       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1018 17:21:02.192156       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1018 17:21:02.192246       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1018 17:21:02.197497       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [0bd3b67ad9f1a557cb80b58eb40e6835ec2fe7b197a64f190ad0ec3fe541654f] <==
	I1018 17:20:54.253023       1 server_linux.go:53] "Using iptables proxy"
	I1018 17:20:56.165939       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 17:20:58.963032       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 17:20:58.963060       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1018 17:20:58.963141       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 17:20:59.278825       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 17:20:59.278897       1 server_linux.go:132] "Using iptables Proxier"
	I1018 17:20:59.297151       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 17:20:59.297508       1 server.go:527] "Version info" version="v1.34.1"
	I1018 17:20:59.301772       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 17:20:59.303148       1 config.go:200] "Starting service config controller"
	I1018 17:20:59.303281       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 17:20:59.303347       1 config.go:106] "Starting endpoint slice config controller"
	I1018 17:20:59.303380       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 17:20:59.303421       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 17:20:59.303452       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 17:20:59.304148       1 config.go:309] "Starting node config controller"
	I1018 17:20:59.304199       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 17:20:59.304226       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 17:20:59.404299       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 17:20:59.405508       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 17:20:59.406123       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [e267a0c37054507bf58876033010e5f266b9f0ce1323f245f26779dc04b4688d] <==
	I1018 17:21:39.767570       1 server_linux.go:53] "Using iptables proxy"
	I1018 17:21:39.923388       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 17:21:40.025195       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 17:21:40.025241       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1018 17:21:40.025351       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 17:21:40.124586       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 17:21:40.124648       1 server_linux.go:132] "Using iptables Proxier"
	I1018 17:21:40.128910       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 17:21:40.129377       1 server.go:527] "Version info" version="v1.34.1"
	I1018 17:21:40.129403       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 17:21:40.133037       1 config.go:200] "Starting service config controller"
	I1018 17:21:40.133125       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 17:21:40.140100       1 config.go:106] "Starting endpoint slice config controller"
	I1018 17:21:40.141511       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 17:21:40.141613       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 17:21:40.141642       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 17:21:40.142624       1 config.go:309] "Starting node config controller"
	I1018 17:21:40.144910       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 17:21:40.145148       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 17:21:40.239104       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 17:21:40.256073       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 17:21:40.256112       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [2a8ded35b5e3e6eb66afd0cc534f74b7898b868999d5eaef36dd59e137dc226e] <==
	I1018 17:20:55.367191       1 serving.go:386] Generated self-signed cert in-memory
	I1018 17:20:59.039488       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 17:20:59.045225       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 17:20:59.076478       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 17:20:59.076655       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1018 17:20:59.076705       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1018 17:20:59.076776       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 17:20:59.078741       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 17:20:59.085095       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 17:20:59.079501       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 17:20:59.087033       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 17:20:59.177269       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1018 17:20:59.189051       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 17:20:59.189068       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 17:21:23.887572       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1018 17:21:23.887601       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1018 17:21:23.887621       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1018 17:21:23.887662       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 17:21:23.888209       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 17:21:23.888267       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I1018 17:21:23.888699       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1018 17:21:23.888741       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [e06ca0f8997f2889b5e345189583cc0a0b171cd3d02329f700fbba6b98d06896] <==
	I1018 17:21:40.640467       1 serving.go:386] Generated self-signed cert in-memory
	I1018 17:21:42.329044       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 17:21:42.329166       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 17:21:42.335023       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 17:21:42.335430       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1018 17:21:42.335509       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1018 17:21:42.335571       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 17:21:42.344772       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 17:21:42.344809       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 17:21:42.344830       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 17:21:42.344837       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 17:21:42.436095       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1018 17:21:42.445571       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 17:21:42.445630       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 17:29:31 functional-306136 kubelet[4029]: E1018 17:29:31.138139    4029 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-hd288" podUID="f5efbc5c-c932-4007-be3f-89964ee5ac25"
	Oct 18 17:29:40 functional-306136 kubelet[4029]: E1018 17:29:40.137630    4029 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-vcjkz" podUID="d3e81abb-4d3b-44d8-8c11-c5aef0ae8abe"
	Oct 18 17:29:43 functional-306136 kubelet[4029]: E1018 17:29:43.137737    4029 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-hd288" podUID="f5efbc5c-c932-4007-be3f-89964ee5ac25"
	Oct 18 17:29:55 functional-306136 kubelet[4029]: E1018 17:29:55.138640    4029 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-vcjkz" podUID="d3e81abb-4d3b-44d8-8c11-c5aef0ae8abe"
	Oct 18 17:29:57 functional-306136 kubelet[4029]: E1018 17:29:57.137574    4029 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-hd288" podUID="f5efbc5c-c932-4007-be3f-89964ee5ac25"
	Oct 18 17:30:06 functional-306136 kubelet[4029]: E1018 17:30:06.138301    4029 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-vcjkz" podUID="d3e81abb-4d3b-44d8-8c11-c5aef0ae8abe"
	Oct 18 17:30:11 functional-306136 kubelet[4029]: E1018 17:30:11.137758    4029 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-hd288" podUID="f5efbc5c-c932-4007-be3f-89964ee5ac25"
	Oct 18 17:30:20 functional-306136 kubelet[4029]: E1018 17:30:20.137821    4029 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-vcjkz" podUID="d3e81abb-4d3b-44d8-8c11-c5aef0ae8abe"
	Oct 18 17:30:24 functional-306136 kubelet[4029]: E1018 17:30:24.137956    4029 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-hd288" podUID="f5efbc5c-c932-4007-be3f-89964ee5ac25"
	Oct 18 17:30:35 functional-306136 kubelet[4029]: E1018 17:30:35.139119    4029 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-hd288" podUID="f5efbc5c-c932-4007-be3f-89964ee5ac25"
	Oct 18 17:30:35 functional-306136 kubelet[4029]: E1018 17:30:35.139969    4029 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-vcjkz" podUID="d3e81abb-4d3b-44d8-8c11-c5aef0ae8abe"
	Oct 18 17:30:47 functional-306136 kubelet[4029]: E1018 17:30:47.137812    4029 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-hd288" podUID="f5efbc5c-c932-4007-be3f-89964ee5ac25"
	Oct 18 17:30:49 functional-306136 kubelet[4029]: E1018 17:30:49.137967    4029 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-vcjkz" podUID="d3e81abb-4d3b-44d8-8c11-c5aef0ae8abe"
	Oct 18 17:30:59 functional-306136 kubelet[4029]: E1018 17:30:59.138000    4029 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-hd288" podUID="f5efbc5c-c932-4007-be3f-89964ee5ac25"
	Oct 18 17:31:03 functional-306136 kubelet[4029]: E1018 17:31:03.138087    4029 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-vcjkz" podUID="d3e81abb-4d3b-44d8-8c11-c5aef0ae8abe"
	Oct 18 17:31:11 functional-306136 kubelet[4029]: E1018 17:31:11.137721    4029 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-hd288" podUID="f5efbc5c-c932-4007-be3f-89964ee5ac25"
	Oct 18 17:31:15 functional-306136 kubelet[4029]: E1018 17:31:15.139117    4029 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-vcjkz" podUID="d3e81abb-4d3b-44d8-8c11-c5aef0ae8abe"
	Oct 18 17:31:26 functional-306136 kubelet[4029]: E1018 17:31:26.137718    4029 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-hd288" podUID="f5efbc5c-c932-4007-be3f-89964ee5ac25"
	Oct 18 17:31:29 functional-306136 kubelet[4029]: E1018 17:31:29.137645    4029 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-vcjkz" podUID="d3e81abb-4d3b-44d8-8c11-c5aef0ae8abe"
	Oct 18 17:31:40 functional-306136 kubelet[4029]: E1018 17:31:40.137733    4029 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-hd288" podUID="f5efbc5c-c932-4007-be3f-89964ee5ac25"
	Oct 18 17:31:40 functional-306136 kubelet[4029]: E1018 17:31:40.137733    4029 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-vcjkz" podUID="d3e81abb-4d3b-44d8-8c11-c5aef0ae8abe"
	Oct 18 17:31:53 functional-306136 kubelet[4029]: E1018 17:31:53.138388    4029 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-hd288" podUID="f5efbc5c-c932-4007-be3f-89964ee5ac25"
	Oct 18 17:31:54 functional-306136 kubelet[4029]: E1018 17:31:54.137506    4029 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-vcjkz" podUID="d3e81abb-4d3b-44d8-8c11-c5aef0ae8abe"
	Oct 18 17:32:08 functional-306136 kubelet[4029]: E1018 17:32:08.138383    4029 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-hd288" podUID="f5efbc5c-c932-4007-be3f-89964ee5ac25"
	Oct 18 17:32:08 functional-306136 kubelet[4029]: E1018 17:32:08.138900    4029 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-vcjkz" podUID="d3e81abb-4d3b-44d8-8c11-c5aef0ae8abe"
	
	
	==> storage-provisioner [0f142d20739cf269db8cdd288879b10ce46eb4e359b9c83fc592d617c6bb276c] <==
	I1018 17:20:58.209112       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 17:20:59.101726       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 17:20:59.122651       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1018 17:20:59.147709       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:21:02.602687       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:21:06.863299       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:21:10.462862       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:21:13.516349       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:21:16.539695       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:21:16.545388       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 17:21:16.545554       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 17:21:16.545782       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-306136_eb45450f-ca83-4179-9b42-aa0340022f75!
	I1018 17:21:16.546664       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bd4facf5-762c-4261-9e0a-d7b5ad02f9dd", APIVersion:"v1", ResourceVersion:"594", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-306136_eb45450f-ca83-4179-9b42-aa0340022f75 became leader
	W1018 17:21:16.549837       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:21:16.555827       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 17:21:16.646832       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-306136_eb45450f-ca83-4179-9b42-aa0340022f75!
	W1018 17:21:18.559248       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:21:18.564356       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:21:20.568201       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:21:20.572870       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:21:22.575782       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:21:22.583441       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [6377853198c6d13a21f0c690adafa6b344804ed532568eab137b2e74908c8a83] <==
	W1018 17:31:51.985854       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:31:53.989105       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:31:53.995704       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:31:55.999144       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:31:56.008694       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:31:58.012758       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:31:58.020245       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:32:00.032000       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:32:00.044228       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:32:02.048130       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:32:02.055299       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:32:04.059287       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:32:04.066003       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:32:06.069198       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:32:06.073915       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:32:08.077696       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:32:08.082592       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:32:10.086140       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:32:10.090927       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:32:12.094366       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:32:12.101142       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:32:14.104453       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:32:14.109042       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:32:16.112584       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 17:32:16.117885       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-306136 -n functional-306136
helpers_test.go:269: (dbg) Run:  kubectl --context functional-306136 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-node-75c85bcc94-hd288 hello-node-connect-7d85dfc575-vcjkz
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-306136 describe pod hello-node-75c85bcc94-hd288 hello-node-connect-7d85dfc575-vcjkz
helpers_test.go:290: (dbg) kubectl --context functional-306136 describe pod hello-node-75c85bcc94-hd288 hello-node-connect-7d85dfc575-vcjkz:

                                                
                                                
-- stdout --
	Name:             hello-node-75c85bcc94-hd288
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-306136/192.168.49.2
	Start Time:       Sat, 18 Oct 2025 17:22:33 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ffdwk (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-ffdwk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m44s                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-hd288 to functional-306136
	  Normal   Pulling    6m58s (x5 over 9m43s)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m58s (x5 over 9m43s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     6m58s (x5 over 9m43s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m36s (x21 over 9m43s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m36s (x21 over 9m43s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-vcjkz
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-306136/192.168.49.2
	Start Time:       Sat, 18 Oct 2025 17:22:14 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:           10.244.0.5
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xct78 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-xct78:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-vcjkz to functional-306136
	  Normal   Pulling    6m54s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m54s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     6m54s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Warning  Failed     4m53s (x20 over 10m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m38s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.52s)
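Note: the kubelet events above point at CRI-O short-name resolution: with short-name mode enforcing, the bare reference kicbase/echo-server:latest resolves ambiguously and every pull fails, so both pods stay Pending. A minimal sketch of a pull that avoids the ambiguity, assuming the image lives on Docker Hub (the docker.io prefix is an assumption, not what the test used):

	# deploy with a fully qualified reference instead of the bare short name
	kubectl --context functional-306136 create deployment hello-node-connect --image=docker.io/kicbase/echo-server:latest
	kubectl --context functional-306136 expose deployment hello-node-connect --type=NodePort --port=8080

Alternatively, relaxing short-name-mode in /etc/containers/registries.conf inside the node (from "enforcing" to "permissive") should let the unqualified name resolve, at the cost of the ambiguity the error is warning about.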

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-306136 image ls --format table --alsologtostderr:
┌───────┬─────┬──────────┬──────┐
│ IMAGE │ TAG │ IMAGE ID │ SIZE │
└───────┴─────┴──────────┴──────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-306136 image ls --format table --alsologtostderr:
I1018 17:32:47.445732   33289 out.go:360] Setting OutFile to fd 1 ...
I1018 17:32:47.445949   33289 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 17:32:47.445976   33289 out.go:374] Setting ErrFile to fd 2...
I1018 17:32:47.445997   33289 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 17:32:47.446281   33289 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-2509/.minikube/bin
I1018 17:32:47.446908   33289 config.go:182] Loaded profile config "functional-306136": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 17:32:47.447081   33289 config.go:182] Loaded profile config "functional-306136": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 17:32:47.447581   33289 cli_runner.go:164] Run: docker container inspect functional-306136 --format={{.State.Status}}
I1018 17:32:47.474978   33289 ssh_runner.go:195] Run: systemctl --version
I1018 17:32:47.475044   33289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-306136
I1018 17:32:47.504856   33289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/functional-306136/id_rsa Username:docker}
I1018 17:32:47.612475   33289 ssh_runner.go:195] Run: sudo crictl images --output json
W1018 17:32:47.667894   33289 cache_images.go:735] Failed to list images for profile functional-306136 crictl images: sudo crictl images --output json: Process exited with status 1
stdout:

                                                
                                                
stderr:
E1018 17:32:47.665042    7671 log.go:32] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = locating item named \"manifest\" for image with ID \"78d83d981d28e507977e0a724615d827de35ac85de15f9464d6390caa11eda16\" (consider removing the image to resolve the issue): file does not exist" filter="image:{}"
time="2025-10-18T17:32:47Z" level=fatal msg="listing images: rpc error: code = Unknown desc = locating item named \"manifest\" for image with ID \"78d83d981d28e507977e0a724615d827de35ac85de15f9464d6390caa11eda16\" (consider removing the image to resolve the issue): file does not exist"
functional_test.go:290: expected │ registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)
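Note: the crictl error is explicit that the image-store entry for ID 78d83d981d28e507977e0a724615d827de35ac85de15f9464d6390caa11eda16 is missing its "manifest" item and suggests removing the image. A rough recovery sketch (the image ID is copied from the error above; whether the listing then succeeds is an assumption):

	# remove the broken store entry on the node, then retry the listing
	out/minikube-linux-arm64 -p functional-306136 ssh -- sudo crictl rmi 78d83d981d28e507977e0a724615d827de35ac85de15f9464d6390caa11eda16
	out/minikube-linux-arm64 -p functional-306136 image ls --format table --alsologtostderr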

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 image load --daemon kicbase/echo-server:functional-306136 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-306136" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.15s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 image load --daemon kicbase/echo-server:functional-306136 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-arm64 -p functional-306136 image load --daemon kicbase/echo-server:functional-306136 --alsologtostderr: (1.018827797s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-306136" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.89s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-306136
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 image load --daemon kicbase/echo-server:functional-306136 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-306136" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.20s)
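Note: the three image load --daemon failures above (ImageLoadDaemon, ImageReloadDaemon, ImageTagAndLoadDaemon) have the same shape: the load command exits successfully, but the subsequent image ls does not show kicbase/echo-server:functional-306136. Given the broken image listing seen in ImageListTable, the missing entry may be a listing problem rather than a failed load; a sketch of how to distinguish the two (the grep filter is an assumption, used only to trim output):

	out/minikube-linux-arm64 -p functional-306136 image load --daemon kicbase/echo-server:functional-306136 --alsologtostderr
	# query the runtime directly instead of going through minikube's listing path
	out/minikube-linux-arm64 -p functional-306136 ssh -- sudo crictl images | grep echo-server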

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 image save kicbase/echo-server:functional-306136 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.32s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1018 17:22:08.699811   27881 out.go:360] Setting OutFile to fd 1 ...
	I1018 17:22:08.700004   27881 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 17:22:08.700016   27881 out.go:374] Setting ErrFile to fd 2...
	I1018 17:22:08.700021   27881 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 17:22:08.700300   27881 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-2509/.minikube/bin
	I1018 17:22:08.700909   27881 config.go:182] Loaded profile config "functional-306136": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:22:08.701063   27881 config.go:182] Loaded profile config "functional-306136": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:22:08.701551   27881 cli_runner.go:164] Run: docker container inspect functional-306136 --format={{.State.Status}}
	I1018 17:22:08.718930   27881 ssh_runner.go:195] Run: systemctl --version
	I1018 17:22:08.718993   27881 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-306136
	I1018 17:22:08.738693   27881 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/functional-306136/id_rsa Username:docker}
	I1018 17:22:08.843562   27881 cache_images.go:290] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
	W1018 17:22:08.843619   27881 cache_images.go:254] Failed to load cached images for "functional-306136": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar: no such file or directory
	I1018 17:22:08.843642   27881 cache_images.go:266] failed pushing to: functional-306136

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.20s)
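Note: this failure is downstream of ImageSaveToFile: the stderr shows the load stat-ing /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar and finding nothing, because the earlier save never produced the tarball. The expected round trip, sketched with an arbitrary /tmp path in place of the workspace path, is:

	out/minikube-linux-arm64 -p functional-306136 image save kicbase/echo-server:functional-306136 /tmp/echo-server-save.tar
	ls -lh /tmp/echo-server-save.tar
	out/minikube-linux-arm64 -p functional-306136 image load /tmp/echo-server-save.tar --alsologtostderr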

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-306136
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 image save --daemon kicbase/echo-server:functional-306136 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-306136
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-306136: exit status 1 (16.64072ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-306136

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-306136

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.37s)
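Note: here image save --daemon reports success, but docker image inspect cannot find localhost/kicbase/echo-server:functional-306136 in the host Docker daemon. A minimal manual check, assuming the tag the test removed first has been restored (the re-tag line is hypothetical and mirrors the TagAndLoad setup):

	docker tag kicbase/echo-server:latest kicbase/echo-server:functional-306136   # hypothetical re-tag
	out/minikube-linux-arm64 -p functional-306136 image save --daemon kicbase/echo-server:functional-306136 --alsologtostderr
	docker images | grep echo-server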

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (600.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-306136 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-306136 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-hd288" [f5efbc5c-c932-4007-be3f-89964ee5ac25] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E1018 17:22:44.404658    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 17:25:00.530325    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 17:25:28.246882    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 17:30:00.529973    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-306136 -n functional-306136
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-10-18 17:32:34.431861869 +0000 UTC m=+1240.729971874
functional_test.go:1460: (dbg) Run:  kubectl --context functional-306136 describe po hello-node-75c85bcc94-hd288 -n default
functional_test.go:1460: (dbg) kubectl --context functional-306136 describe po hello-node-75c85bcc94-hd288 -n default:
Name:             hello-node-75c85bcc94-hd288
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-306136/192.168.49.2
Start Time:       Sat, 18 Oct 2025 17:22:33 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ffdwk (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-ffdwk:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-hd288 to functional-306136
Normal   Pulling    7m15s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m15s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m15s (x5 over 10m)   kubelet            Error: ErrImagePull
Normal   BackOff    4m53s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m53s (x21 over 10m)  kubelet            Error: ImagePullBackOff
functional_test.go:1460: (dbg) Run:  kubectl --context functional-306136 logs hello-node-75c85bcc94-hd288 -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-306136 logs hello-node-75c85bcc94-hd288 -n default: exit status 1 (120.928527ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-hd288" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-306136 logs hello-node-75c85bcc94-hd288 -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.91s)
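Note: the deployment never converges for the same reason as ServiceCmdConnect: under enforcing short-name mode the kubelet cannot pull the unqualified kicbase/echo-server reference, so the pod sits in ImagePullBackOff for the full 10m0s wait. A quick way to confirm the service has no ready backend (commands only; the expected empty ENDPOINTS output is an assumption based on the events above):

	kubectl --context functional-306136 get pods -l app=hello-node
	kubectl --context functional-306136 get endpoints hello-node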

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-306136 service --namespace=default --https --url hello-node: exit status 115 (471.027622ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:30376
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-306136 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.47s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-306136 service hello-node --url --format={{.IP}}: exit status 115 (499.084375ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-306136 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.50s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-306136 service hello-node --url: exit status 115 (520.640472ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:30376
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-306136 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30376
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.52s)
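Note: the three service URL subtests (HTTPS, Format, URL) all compute a valid NodePort URL and then exit with SVC_UNREACHABLE because hello-node has no running pod behind it, matching the DeployApp failure above. If the pod were running, the printed endpoint could be exercised directly; the curl below is a sketch against the URL from this log, not something that would succeed in this run:

	curl --max-time 5 http://192.168.49.2:30376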

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (504.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-181800 stop --alsologtostderr -v 5: (26.62297985s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 start --wait true --alsologtostderr -v 5
E1018 17:39:47.514526    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/functional-306136/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 17:40:00.530819    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 17:42:03.655202    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/functional-306136/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 17:42:31.356186    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/functional-306136/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 17:45:00.530764    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 17:47:03.654970    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/functional-306136/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-181800 start --wait true --alsologtostderr -v 5: exit status 105 (7m51.233287737s)

                                                
                                                
-- stdout --
	* [ha-181800] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-2509/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-2509/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "ha-181800" primary control-plane node in "ha-181800" cluster
	* Pulling base image v0.0.48-1760609789-21757 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	* Enabled addons: 
	
	* Starting "ha-181800-m02" control-plane node in "ha-181800" cluster
	* Pulling base image v0.0.48-1760609789-21757 ...
	* Found network options:
	  - NO_PROXY=192.168.49.2
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	  - env NO_PROXY=192.168.49.2
	* Verifying Kubernetes components...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 17:39:45.975281   51251 out.go:360] Setting OutFile to fd 1 ...
	I1018 17:39:45.975504   51251 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 17:39:45.975531   51251 out.go:374] Setting ErrFile to fd 2...
	I1018 17:39:45.975549   51251 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 17:39:45.975846   51251 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-2509/.minikube/bin
	I1018 17:39:45.976262   51251 out.go:368] Setting JSON to false
	I1018 17:39:45.977169   51251 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":4935,"bootTime":1760804251,"procs":151,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1018 17:39:45.977269   51251 start.go:141] virtualization:  
	I1018 17:39:45.980610   51251 out.go:179] * [ha-181800] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 17:39:45.984311   51251 out.go:179]   - MINIKUBE_LOCATION=21409
	I1018 17:39:45.984374   51251 notify.go:220] Checking for updates...
	I1018 17:39:45.990274   51251 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 17:39:45.993215   51251 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-2509/kubeconfig
	I1018 17:39:45.996106   51251 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-2509/.minikube
	I1018 17:39:45.999014   51251 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 17:39:46.004420   51251 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 17:39:46.008306   51251 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:39:46.008436   51251 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 17:39:46.042019   51251 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 17:39:46.042131   51251 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 17:39:46.099091   51251 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-18 17:39:46.089556228 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 17:39:46.099210   51251 docker.go:318] overlay module found
	I1018 17:39:46.102259   51251 out.go:179] * Using the docker driver based on existing profile
	I1018 17:39:46.105078   51251 start.go:305] selected driver: docker
	I1018 17:39:46.105099   51251 start.go:925] validating driver "docker" against &{Name:ha-181800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-181800 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:f
alse ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Dis
ableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 17:39:46.105237   51251 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 17:39:46.105338   51251 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 17:39:46.159602   51251 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-18 17:39:46.150874009 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 17:39:46.159982   51251 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 17:39:46.160020   51251 cni.go:84] Creating CNI manager for ""
	I1018 17:39:46.160080   51251 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1018 17:39:46.160126   51251 start.go:349] cluster config:
	{Name:ha-181800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-181800 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 17:39:46.165176   51251 out.go:179] * Starting "ha-181800" primary control-plane node in "ha-181800" cluster
	I1018 17:39:46.168051   51251 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 17:39:46.170939   51251 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 17:39:46.173836   51251 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 17:39:46.173896   51251 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1018 17:39:46.173911   51251 cache.go:58] Caching tarball of preloaded images
	I1018 17:39:46.173925   51251 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 17:39:46.173990   51251 preload.go:233] Found /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 17:39:46.174000   51251 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 17:39:46.174155   51251 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/config.json ...
	I1018 17:39:46.192746   51251 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 17:39:46.192769   51251 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 17:39:46.192782   51251 cache.go:232] Successfully downloaded all kic artifacts
	I1018 17:39:46.192803   51251 start.go:360] acquireMachinesLock for ha-181800: {Name:mk3f5dfba2ab7d01f94f924dfcc5edab5f076901 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 17:39:46.192864   51251 start.go:364] duration metric: took 36.243µs to acquireMachinesLock for "ha-181800"
	I1018 17:39:46.192888   51251 start.go:96] Skipping create...Using existing machine configuration
	I1018 17:39:46.192896   51251 fix.go:54] fixHost starting: 
	I1018 17:39:46.193211   51251 cli_runner.go:164] Run: docker container inspect ha-181800 --format={{.State.Status}}
	I1018 17:39:46.209470   51251 fix.go:112] recreateIfNeeded on ha-181800: state=Stopped err=<nil>
	W1018 17:39:46.209498   51251 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 17:39:46.212825   51251 out.go:252] * Restarting existing docker container for "ha-181800" ...
	I1018 17:39:46.212900   51251 cli_runner.go:164] Run: docker start ha-181800
	I1018 17:39:46.480673   51251 cli_runner.go:164] Run: docker container inspect ha-181800 --format={{.State.Status}}
	I1018 17:39:46.500591   51251 kic.go:430] container "ha-181800" state is running.
	I1018 17:39:46.501011   51251 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-181800
	I1018 17:39:46.526396   51251 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/config.json ...
	I1018 17:39:46.526638   51251 machine.go:93] provisionDockerMachine start ...
	I1018 17:39:46.526707   51251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:39:46.546472   51251 main.go:141] libmachine: Using SSH client type: native
	I1018 17:39:46.546909   51251 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32808 <nil> <nil>}
	I1018 17:39:46.546927   51251 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 17:39:46.547526   51251 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1018 17:39:49.696893   51251 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-181800
	
	I1018 17:39:49.696925   51251 ubuntu.go:182] provisioning hostname "ha-181800"
	I1018 17:39:49.697031   51251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:39:49.714524   51251 main.go:141] libmachine: Using SSH client type: native
	I1018 17:39:49.714832   51251 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32808 <nil> <nil>}
	I1018 17:39:49.714849   51251 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-181800 && echo "ha-181800" | sudo tee /etc/hostname
	I1018 17:39:49.873528   51251 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-181800
	
	I1018 17:39:49.873612   51251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:39:49.891188   51251 main.go:141] libmachine: Using SSH client type: native
	I1018 17:39:49.891504   51251 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32808 <nil> <nil>}
	I1018 17:39:49.891521   51251 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-181800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-181800/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-181800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 17:39:50.037199   51251 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 17:39:50.037228   51251 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-2509/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-2509/.minikube}
	I1018 17:39:50.037247   51251 ubuntu.go:190] setting up certificates
	I1018 17:39:50.037257   51251 provision.go:84] configureAuth start
	I1018 17:39:50.037320   51251 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-181800
	I1018 17:39:50.055129   51251 provision.go:143] copyHostCerts
	I1018 17:39:50.055181   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem
	I1018 17:39:50.055213   51251 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem, removing ...
	I1018 17:39:50.055234   51251 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem
	I1018 17:39:50.055314   51251 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem (1078 bytes)
	I1018 17:39:50.055408   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem
	I1018 17:39:50.055430   51251 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem, removing ...
	I1018 17:39:50.055438   51251 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem
	I1018 17:39:50.055466   51251 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem (1123 bytes)
	I1018 17:39:50.055525   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem
	I1018 17:39:50.055546   51251 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem, removing ...
	I1018 17:39:50.055555   51251 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem
	I1018 17:39:50.055581   51251 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem (1675 bytes)
	I1018 17:39:50.055647   51251 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem org=jenkins.ha-181800 san=[127.0.0.1 192.168.49.2 ha-181800 localhost minikube]
	I1018 17:39:50.382522   51251 provision.go:177] copyRemoteCerts
	I1018 17:39:50.382593   51251 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 17:39:50.382633   51251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:39:50.403959   51251 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800/id_rsa Username:docker}
	I1018 17:39:50.508789   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1018 17:39:50.508850   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 17:39:50.526450   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1018 17:39:50.526538   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1018 17:39:50.544187   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1018 17:39:50.544274   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 17:39:50.561987   51251 provision.go:87] duration metric: took 524.706666ms to configureAuth
	I1018 17:39:50.562063   51251 ubuntu.go:206] setting minikube options for container-runtime
	I1018 17:39:50.562317   51251 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:39:50.562424   51251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:39:50.578939   51251 main.go:141] libmachine: Using SSH client type: native
	I1018 17:39:50.579244   51251 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32808 <nil> <nil>}
	I1018 17:39:50.579264   51251 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 17:39:50.937128   51251 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 17:39:50.937197   51251 machine.go:96] duration metric: took 4.410541s to provisionDockerMachine
	I1018 17:39:50.937222   51251 start.go:293] postStartSetup for "ha-181800" (driver="docker")
	I1018 17:39:50.937247   51251 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 17:39:50.937359   51251 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 17:39:50.937444   51251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:39:50.959339   51251 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800/id_rsa Username:docker}
	I1018 17:39:51.065300   51251 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 17:39:51.068761   51251 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 17:39:51.068792   51251 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 17:39:51.068803   51251 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/addons for local assets ...
	I1018 17:39:51.068858   51251 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/files for local assets ...
	I1018 17:39:51.068963   51251 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> 43202.pem in /etc/ssl/certs
	I1018 17:39:51.068976   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> /etc/ssl/certs/43202.pem
	I1018 17:39:51.069076   51251 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 17:39:51.076928   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /etc/ssl/certs/43202.pem (1708 bytes)
	I1018 17:39:51.094473   51251 start.go:296] duration metric: took 157.222631ms for postStartSetup
	I1018 17:39:51.094579   51251 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 17:39:51.094625   51251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:39:51.113220   51251 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800/id_rsa Username:docker}
	I1018 17:39:51.213567   51251 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 17:39:51.218175   51251 fix.go:56] duration metric: took 5.025272015s for fixHost
	I1018 17:39:51.218200   51251 start.go:83] releasing machines lock for "ha-181800", held for 5.025323101s
	I1018 17:39:51.218283   51251 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-181800
	I1018 17:39:51.235815   51251 ssh_runner.go:195] Run: cat /version.json
	I1018 17:39:51.235850   51251 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 17:39:51.235866   51251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:39:51.235904   51251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:39:51.261163   51251 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800/id_rsa Username:docker}
	I1018 17:39:51.270603   51251 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800/id_rsa Username:docker}
	I1018 17:39:51.360468   51251 ssh_runner.go:195] Run: systemctl --version
	I1018 17:39:51.454722   51251 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 17:39:51.498840   51251 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 17:39:51.503695   51251 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 17:39:51.503796   51251 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 17:39:51.511526   51251 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 17:39:51.511549   51251 start.go:495] detecting cgroup driver to use...
	I1018 17:39:51.511578   51251 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 17:39:51.511630   51251 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 17:39:51.526599   51251 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 17:39:51.539484   51251 docker.go:218] disabling cri-docker service (if available) ...
	I1018 17:39:51.539576   51251 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 17:39:51.554963   51251 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 17:39:51.568183   51251 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 17:39:51.676636   51251 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 17:39:51.792230   51251 docker.go:234] disabling docker service ...
	I1018 17:39:51.792306   51251 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 17:39:51.806847   51251 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 17:39:51.819137   51251 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 17:39:51.938883   51251 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 17:39:52.058796   51251 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 17:39:52.072487   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 17:39:52.088092   51251 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 17:39:52.088205   51251 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:39:52.097568   51251 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 17:39:52.097729   51251 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:39:52.107431   51251 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:39:52.116597   51251 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:39:52.125822   51251 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 17:39:52.134598   51251 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:39:52.143667   51251 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:39:52.151898   51251 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:39:52.160172   51251 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 17:39:52.167407   51251 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 17:39:52.174657   51251 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 17:39:52.287403   51251 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 17:39:52.421729   51251 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 17:39:52.421850   51251 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 17:39:52.425707   51251 start.go:563] Will wait 60s for crictl version
	I1018 17:39:52.425813   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:39:52.429420   51251 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 17:39:52.453867   51251 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 17:39:52.453974   51251 ssh_runner.go:195] Run: crio --version
	I1018 17:39:52.486777   51251 ssh_runner.go:195] Run: crio --version
	I1018 17:39:52.520354   51251 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 17:39:52.523389   51251 cli_runner.go:164] Run: docker network inspect ha-181800 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 17:39:52.539892   51251 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1018 17:39:52.543780   51251 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 17:39:52.553416   51251 kubeadm.go:883] updating cluster {Name:ha-181800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-181800 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 17:39:52.553576   51251 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 17:39:52.553634   51251 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 17:39:52.588251   51251 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 17:39:52.588276   51251 crio.go:433] Images already preloaded, skipping extraction
	I1018 17:39:52.588335   51251 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 17:39:52.613957   51251 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 17:39:52.613979   51251 cache_images.go:85] Images are preloaded, skipping loading
	I1018 17:39:52.613989   51251 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1018 17:39:52.614102   51251 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-181800 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-181800 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 17:39:52.614189   51251 ssh_runner.go:195] Run: crio config
	I1018 17:39:52.670252   51251 cni.go:84] Creating CNI manager for ""
	I1018 17:39:52.670275   51251 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1018 17:39:52.670294   51251 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 17:39:52.670319   51251 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-181800 NodeName:ha-181800 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 17:39:52.670455   51251 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-181800"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 17:39:52.670475   51251 kube-vip.go:115] generating kube-vip config ...
	I1018 17:39:52.670529   51251 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1018 17:39:52.682279   51251 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1018 17:39:52.682377   51251 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1018 17:39:52.682436   51251 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 17:39:52.689950   51251 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 17:39:52.690041   51251 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1018 17:39:52.697809   51251 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1018 17:39:52.710709   51251 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 17:39:52.723367   51251 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1018 17:39:52.735890   51251 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1018 17:39:52.748648   51251 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1018 17:39:52.752220   51251 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 17:39:52.762098   51251 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 17:39:52.871320   51251 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 17:39:52.886583   51251 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800 for IP: 192.168.49.2
	I1018 17:39:52.886603   51251 certs.go:195] generating shared ca certs ...
	I1018 17:39:52.886618   51251 certs.go:227] acquiring lock for ca certs: {Name:mk544ed642fa2832e9f6dd22fa45f3270b7c1ad4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:39:52.886785   51251 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key
	I1018 17:39:52.886838   51251 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key
	I1018 17:39:52.886849   51251 certs.go:257] generating profile certs ...
	I1018 17:39:52.886923   51251 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/client.key
	I1018 17:39:52.886953   51251 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.key.46a58690
	I1018 17:39:52.886970   51251 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.crt.46a58690 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1018 17:39:53.268315   51251 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.crt.46a58690 ...
	I1018 17:39:53.268348   51251 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.crt.46a58690: {Name:mk0cc861493b9d286eed0bfb736b15e28a1706f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:39:53.268572   51251 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.key.46a58690 ...
	I1018 17:39:53.268589   51251 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.key.46a58690: {Name:mk424cb4f615a1903e846801cb9cb2e734afdfb7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:39:53.268677   51251 certs.go:382] copying /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.crt.46a58690 -> /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.crt
	I1018 17:39:53.268822   51251 certs.go:386] copying /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.key.46a58690 -> /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.key
	I1018 17:39:53.268969   51251 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.key
	I1018 17:39:53.268988   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1018 17:39:53.269005   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1018 17:39:53.269023   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1018 17:39:53.269043   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1018 17:39:53.269070   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1018 17:39:53.269094   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1018 17:39:53.269112   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1018 17:39:53.269123   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1018 17:39:53.269179   51251 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem (1338 bytes)
	W1018 17:39:53.269213   51251 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320_empty.pem, impossibly tiny 0 bytes
	I1018 17:39:53.269225   51251 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 17:39:53.269249   51251 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem (1078 bytes)
	I1018 17:39:53.269273   51251 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem (1123 bytes)
	I1018 17:39:53.269299   51251 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem (1675 bytes)
	I1018 17:39:53.269346   51251 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem (1708 bytes)
	I1018 17:39:53.269376   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> /usr/share/ca-certificates/43202.pem
	I1018 17:39:53.269392   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:39:53.269403   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem -> /usr/share/ca-certificates/4320.pem
	I1018 17:39:53.269946   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 17:39:53.289258   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 17:39:53.307330   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 17:39:53.325012   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 17:39:53.342168   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1018 17:39:53.359559   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 17:39:53.376235   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 17:39:53.393388   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 17:39:53.409944   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /usr/share/ca-certificates/43202.pem (1708 bytes)
	I1018 17:39:53.427591   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 17:39:53.443532   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem --> /usr/share/ca-certificates/4320.pem (1338 bytes)
	I1018 17:39:53.459786   51251 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 17:39:53.472627   51251 ssh_runner.go:195] Run: openssl version
	I1018 17:39:53.478997   51251 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43202.pem && ln -fs /usr/share/ca-certificates/43202.pem /etc/ssl/certs/43202.pem"
	I1018 17:39:53.486807   51251 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43202.pem
	I1018 17:39:53.490229   51251 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 17:19 /usr/share/ca-certificates/43202.pem
	I1018 17:39:53.490289   51251 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43202.pem
	I1018 17:39:53.534916   51251 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43202.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 17:39:53.547040   51251 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 17:39:53.561930   51251 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:39:53.567602   51251 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:39:53.567707   51251 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:39:53.617018   51251 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 17:39:53.628559   51251 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4320.pem && ln -fs /usr/share/ca-certificates/4320.pem /etc/ssl/certs/4320.pem"
	I1018 17:39:53.641445   51251 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4320.pem
	I1018 17:39:53.645568   51251 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 17:19 /usr/share/ca-certificates/4320.pem
	I1018 17:39:53.645680   51251 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4320.pem
	I1018 17:39:53.715014   51251 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4320.pem /etc/ssl/certs/51391683.0"
	I1018 17:39:53.744004   51251 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 17:39:53.751940   51251 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 17:39:53.829686   51251 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 17:39:53.890601   51251 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 17:39:53.957371   51251 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 17:39:54.017003   51251 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 17:39:54.064655   51251 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1018 17:39:54.111921   51251 kubeadm.go:400] StartCluster: {Name:ha-181800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-181800 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 17:39:54.112099   51251 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 17:39:54.112174   51251 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 17:39:54.163162   51251 cri.go:89] found id: "dda012a63c45a5c37a124da696c59f0ac82f51c6728ee30f5a6b3a9df6f28b54"
	I1018 17:39:54.163230   51251 cri.go:89] found id: "ac8ef32697a356e273cd1b84ce23b6e628c802ef7b211f001fc50bb472635814"
	I1018 17:39:54.163250   51251 cri.go:89] found id: "4957aae3df6cdc996ba2129d1f43210ebdec1c480e6db0115ee34f32691af151"
	I1018 17:39:54.163265   51251 cri.go:89] found id: "6e9b6c2f0e69c56776af6be092e8313aef540b7319fd0664f3eb3f947353a66b"
	I1018 17:39:54.163282   51251 cri.go:89] found id: "a0776ff98d8411ec5ae52a11de472cb17e1d8c764d642bf18a22aec8b44a08ee"
	I1018 17:39:54.163300   51251 cri.go:89] found id: ""
	I1018 17:39:54.163370   51251 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 17:39:54.178952   51251 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T17:39:54Z" level=error msg="open /run/runc: no such file or directory"
	I1018 17:39:54.179088   51251 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 17:39:54.202035   51251 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 17:39:54.202104   51251 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 17:39:54.202180   51251 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 17:39:54.218306   51251 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 17:39:54.218743   51251 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-181800" does not appear in /home/jenkins/minikube-integration/21409-2509/kubeconfig
	I1018 17:39:54.218882   51251 kubeconfig.go:62] /home/jenkins/minikube-integration/21409-2509/kubeconfig needs updating (will repair): [kubeconfig missing "ha-181800" cluster setting kubeconfig missing "ha-181800" context setting]
	I1018 17:39:54.219252   51251 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/kubeconfig: {Name:mk229d7d0b6575771899e0ce8a346bbddf5cf86b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:39:54.219794   51251 kapi.go:59] client config for ha-181800: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/client.key", CAFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1018 17:39:54.220519   51251 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1018 17:39:54.220606   51251 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1018 17:39:54.220635   51251 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1018 17:39:54.220585   51251 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1018 17:39:54.220726   51251 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1018 17:39:54.220753   51251 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1018 17:39:54.221075   51251 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 17:39:54.234375   51251 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1018 17:39:54.234436   51251 kubeadm.go:601] duration metric: took 32.30335ms to restartPrimaryControlPlane
	I1018 17:39:54.234460   51251 kubeadm.go:402] duration metric: took 122.54698ms to StartCluster
	I1018 17:39:54.234487   51251 settings.go:142] acquiring lock: {Name:mk3a3fd093bc95e20cc1842611fedcbe4a79e692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:39:54.234565   51251 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-2509/kubeconfig
	I1018 17:39:54.235140   51251 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/kubeconfig: {Name:mk229d7d0b6575771899e0ce8a346bbddf5cf86b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:39:54.235365   51251 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 17:39:54.235417   51251 start.go:241] waiting for startup goroutines ...
	I1018 17:39:54.235446   51251 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 17:39:54.235957   51251 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:39:54.241374   51251 out.go:179] * Enabled addons: 
	I1018 17:39:54.244317   51251 addons.go:514] duration metric: took 8.873213ms for enable addons: enabled=[]
	I1018 17:39:54.244381   51251 start.go:246] waiting for cluster config update ...
	I1018 17:39:54.244403   51251 start.go:255] writing updated cluster config ...
	I1018 17:39:54.247646   51251 out.go:203] 
	I1018 17:39:54.250620   51251 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:39:54.250787   51251 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/config.json ...
	I1018 17:39:54.254182   51251 out.go:179] * Starting "ha-181800-m02" control-plane node in "ha-181800" cluster
	I1018 17:39:54.257073   51251 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 17:39:54.259992   51251 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 17:39:54.262894   51251 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 17:39:54.262941   51251 cache.go:58] Caching tarball of preloaded images
	I1018 17:39:54.263061   51251 preload.go:233] Found /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 17:39:54.263094   51251 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 17:39:54.263229   51251 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/config.json ...
	I1018 17:39:54.263458   51251 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 17:39:54.291252   51251 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 17:39:54.291269   51251 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 17:39:54.291282   51251 cache.go:232] Successfully downloaded all kic artifacts
	I1018 17:39:54.291303   51251 start.go:360] acquireMachinesLock for ha-181800-m02: {Name:mk36a488c0fbfc8557c6ba291b969aad85b45635 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 17:39:54.291352   51251 start.go:364] duration metric: took 33.977µs to acquireMachinesLock for "ha-181800-m02"
	I1018 17:39:54.291370   51251 start.go:96] Skipping create...Using existing machine configuration
	I1018 17:39:54.291375   51251 fix.go:54] fixHost starting: m02
	I1018 17:39:54.291629   51251 cli_runner.go:164] Run: docker container inspect ha-181800-m02 --format={{.State.Status}}
	I1018 17:39:54.318512   51251 fix.go:112] recreateIfNeeded on ha-181800-m02: state=Stopped err=<nil>
	W1018 17:39:54.318536   51251 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 17:39:54.321781   51251 out.go:252] * Restarting existing docker container for "ha-181800-m02" ...
	I1018 17:39:54.321859   51251 cli_runner.go:164] Run: docker start ha-181800-m02
	I1018 17:39:54.692758   51251 cli_runner.go:164] Run: docker container inspect ha-181800-m02 --format={{.State.Status}}
	I1018 17:39:54.723920   51251 kic.go:430] container "ha-181800-m02" state is running.
	I1018 17:39:54.724263   51251 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-181800-m02
	I1018 17:39:54.749215   51251 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/config.json ...
	I1018 17:39:54.749467   51251 machine.go:93] provisionDockerMachine start ...
	I1018 17:39:54.749523   51251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m02
	I1018 17:39:54.781536   51251 main.go:141] libmachine: Using SSH client type: native
	I1018 17:39:54.781830   51251 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I1018 17:39:54.781839   51251 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 17:39:54.782427   51251 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39794->127.0.0.1:32813: read: connection reset by peer
	I1018 17:39:58.082162   51251 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-181800-m02
	
	I1018 17:39:58.082184   51251 ubuntu.go:182] provisioning hostname "ha-181800-m02"
	I1018 17:39:58.082261   51251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m02
	I1018 17:39:58.126530   51251 main.go:141] libmachine: Using SSH client type: native
	I1018 17:39:58.126844   51251 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I1018 17:39:58.126855   51251 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-181800-m02 && echo "ha-181800-m02" | sudo tee /etc/hostname
	I1018 17:39:58.443573   51251 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-181800-m02
	
	I1018 17:39:58.443690   51251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m02
	I1018 17:39:58.478907   51251 main.go:141] libmachine: Using SSH client type: native
	I1018 17:39:58.479213   51251 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I1018 17:39:58.479243   51251 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-181800-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-181800-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-181800-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 17:39:58.737653   51251 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 17:39:58.737680   51251 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-2509/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-2509/.minikube}
	I1018 17:39:58.737725   51251 ubuntu.go:190] setting up certificates
	I1018 17:39:58.737736   51251 provision.go:84] configureAuth start
	I1018 17:39:58.737818   51251 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-181800-m02
	I1018 17:39:58.774675   51251 provision.go:143] copyHostCerts
	I1018 17:39:58.774718   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem
	I1018 17:39:58.774757   51251 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem, removing ...
	I1018 17:39:58.774769   51251 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem
	I1018 17:39:58.774848   51251 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem (1123 bytes)
	I1018 17:39:58.774946   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem
	I1018 17:39:58.774970   51251 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem, removing ...
	I1018 17:39:58.774977   51251 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem
	I1018 17:39:58.775018   51251 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem (1675 bytes)
	I1018 17:39:58.775074   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem
	I1018 17:39:58.775100   51251 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem, removing ...
	I1018 17:39:58.775109   51251 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem
	I1018 17:39:58.775135   51251 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem (1078 bytes)
	I1018 17:39:58.775197   51251 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem org=jenkins.ha-181800-m02 san=[127.0.0.1 192.168.49.3 ha-181800-m02 localhost minikube]
	I1018 17:39:59.196567   51251 provision.go:177] copyRemoteCerts
	I1018 17:39:59.197114   51251 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 17:39:59.197174   51251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m02
	I1018 17:39:59.222600   51251 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m02/id_rsa Username:docker}
	I1018 17:39:59.394297   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1018 17:39:59.394389   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 17:39:59.450203   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1018 17:39:59.450288   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1018 17:39:59.513512   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1018 17:39:59.513624   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 17:39:59.573995   51251 provision.go:87] duration metric: took 836.238905ms to configureAuth
	I1018 17:39:59.574021   51251 ubuntu.go:206] setting minikube options for container-runtime
	I1018 17:39:59.574290   51251 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:39:59.574415   51251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m02
	I1018 17:39:59.606597   51251 main.go:141] libmachine: Using SSH client type: native
	I1018 17:39:59.606908   51251 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I1018 17:39:59.606927   51251 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 17:40:00.196427   51251 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 17:40:00.196520   51251 machine.go:96] duration metric: took 5.447042221s to provisionDockerMachine
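The provisioning step recorded above is driven entirely over SSH against the forwarded 22/tcp port of the node container (127.0.0.1:32813 here). A minimal sketch of that pattern with golang.org/x/crypto/ssh — the address, user, key path and command are stand-ins shaped like the values in the log, not code taken from minikube:

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    // runRemote opens an SSH session authenticated with a private key and runs
    // one command, mirroring the "About to run SSH command" steps in the log.
    func runRemote(addr, user, keyPath, cmd string) (string, error) {
    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		return "", err
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		return "", err
    	}
    	cfg := &ssh.ClientConfig{
    		User:            user,
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test node, not for production
    	}
    	client, err := ssh.Dial("tcp", addr, cfg)
    	if err != nil {
    		return "", err
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		return "", err
    	}
    	defer sess.Close()
    	out, err := sess.CombinedOutput(cmd)
    	return string(out), err
    }

    func main() {
    	// Hypothetical values shaped like the log's (port 32813, user "docker").
    	out, err := runRemote("127.0.0.1:32813", "docker", "/home/jenkins/.ssh/id_rsa",
    		`sudo hostname ha-181800-m02 && echo "ha-181800-m02" | sudo tee /etc/hostname`)
    	fmt.Println(out, err)
    }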
	I1018 17:40:00.196547   51251 start.go:293] postStartSetup for "ha-181800-m02" (driver="docker")
	I1018 17:40:00.196572   51251 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 17:40:00.196694   51251 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 17:40:00.196782   51251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m02
	I1018 17:40:00.238873   51251 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m02/id_rsa Username:docker}
	I1018 17:40:00.392500   51251 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 17:40:00.403930   51251 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 17:40:00.403959   51251 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 17:40:00.403971   51251 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/addons for local assets ...
	I1018 17:40:00.404043   51251 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/files for local assets ...
	I1018 17:40:00.404125   51251 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> 43202.pem in /etc/ssl/certs
	I1018 17:40:00.404133   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> /etc/ssl/certs/43202.pem
	I1018 17:40:00.404244   51251 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 17:40:00.423321   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /etc/ssl/certs/43202.pem (1708 bytes)
	I1018 17:40:00.459796   51251 start.go:296] duration metric: took 263.21852ms for postStartSetup
	I1018 17:40:00.459966   51251 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 17:40:00.460049   51251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m02
	I1018 17:40:00.503330   51251 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m02/id_rsa Username:docker}
	I1018 17:40:00.631049   51251 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 17:40:00.645680   51251 fix.go:56] duration metric: took 6.354295561s for fixHost
	I1018 17:40:00.645709   51251 start.go:83] releasing machines lock for "ha-181800-m02", held for 6.35434937s
	I1018 17:40:00.645791   51251 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-181800-m02
	I1018 17:40:00.682830   51251 out.go:179] * Found network options:
	I1018 17:40:00.685894   51251 out.go:179]   - NO_PROXY=192.168.49.2
	W1018 17:40:00.688804   51251 proxy.go:120] fail to check proxy env: Error ip not in block
	W1018 17:40:00.688858   51251 proxy.go:120] fail to check proxy env: Error ip not in block
	I1018 17:40:00.688930   51251 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 17:40:00.689085   51251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m02
	I1018 17:40:00.689351   51251 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 17:40:00.689409   51251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m02
	I1018 17:40:00.730142   51251 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m02/id_rsa Username:docker}
	I1018 17:40:00.730174   51251 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m02/id_rsa Username:docker}
	I1018 17:40:01.294197   51251 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 17:40:01.312592   51251 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 17:40:01.312744   51251 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 17:40:01.330228   51251 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 17:40:01.330302   51251 start.go:495] detecting cgroup driver to use...
	I1018 17:40:01.330348   51251 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 17:40:01.330425   51251 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 17:40:01.357073   51251 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 17:40:01.416356   51251 docker.go:218] disabling cri-docker service (if available) ...
	I1018 17:40:01.416475   51251 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 17:40:01.453551   51251 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 17:40:01.481435   51251 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 17:40:01.742441   51251 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 17:40:01.978817   51251 docker.go:234] disabling docker service ...
	I1018 17:40:01.978936   51251 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 17:40:02.001514   51251 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 17:40:02.021678   51251 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 17:40:02.249968   51251 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 17:40:02.480556   51251 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 17:40:02.498908   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 17:40:02.526424   51251 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 17:40:02.526493   51251 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:40:02.542071   51251 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 17:40:02.542141   51251 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:40:02.559770   51251 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:40:02.574006   51251 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:40:02.589455   51251 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 17:40:02.598587   51251 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:40:02.612076   51251 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:40:02.624069   51251 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:40:02.637136   51251 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 17:40:02.652415   51251 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 17:40:02.662181   51251 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 17:40:02.863894   51251 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 17:41:33.166156   51251 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.302227656s)
	I1018 17:41:33.166194   51251 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 17:41:33.166252   51251 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 17:41:33.170771   51251 start.go:563] Will wait 60s for crictl version
	I1018 17:41:33.170830   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:41:33.176098   51251 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 17:41:33.213255   51251 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
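After restarting CRI-O (1m30s in this run), the tool waits up to 60s for /var/run/crio/crio.sock to exist before probing crictl, as the "Will wait 60s for socket path" line above records. A small stdlib-only sketch of that kind of bounded wait; the helper itself is illustrative, only the path and timeout come from the log:

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForSocket polls for a path until it exists or the deadline passes.
    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out waiting for %s", path)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }

    func main() {
    	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
    		fmt.Println(err)
    	}
    }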
	I1018 17:41:33.213351   51251 ssh_runner.go:195] Run: crio --version
	I1018 17:41:33.258540   51251 ssh_runner.go:195] Run: crio --version
	I1018 17:41:33.296286   51251 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 17:41:33.299353   51251 out.go:179]   - env NO_PROXY=192.168.49.2
	I1018 17:41:33.302220   51251 cli_runner.go:164] Run: docker network inspect ha-181800 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 17:41:33.319775   51251 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1018 17:41:33.324290   51251 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 17:41:33.336317   51251 mustload.go:65] Loading cluster: ha-181800
	I1018 17:41:33.336557   51251 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:41:33.336817   51251 cli_runner.go:164] Run: docker container inspect ha-181800 --format={{.State.Status}}
	I1018 17:41:33.362604   51251 host.go:66] Checking if "ha-181800" exists ...
	I1018 17:41:33.362892   51251 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800 for IP: 192.168.49.3
	I1018 17:41:33.362901   51251 certs.go:195] generating shared ca certs ...
	I1018 17:41:33.362915   51251 certs.go:227] acquiring lock for ca certs: {Name:mk544ed642fa2832e9f6dd22fa45f3270b7c1ad4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:41:33.363034   51251 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key
	I1018 17:41:33.363081   51251 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key
	I1018 17:41:33.363088   51251 certs.go:257] generating profile certs ...
	I1018 17:41:33.363157   51251 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/client.key
	I1018 17:41:33.363222   51251 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.key.887e0b27
	I1018 17:41:33.363266   51251 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.key
	I1018 17:41:33.363274   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1018 17:41:33.363286   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1018 17:41:33.363296   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1018 17:41:33.363306   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1018 17:41:33.363316   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1018 17:41:33.363328   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1018 17:41:33.363338   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1018 17:41:33.363348   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1018 17:41:33.363398   51251 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem (1338 bytes)
	W1018 17:41:33.363424   51251 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320_empty.pem, impossibly tiny 0 bytes
	I1018 17:41:33.363433   51251 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 17:41:33.363455   51251 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem (1078 bytes)
	I1018 17:41:33.363476   51251 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem (1123 bytes)
	I1018 17:41:33.363496   51251 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem (1675 bytes)
	I1018 17:41:33.363536   51251 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem (1708 bytes)
	I1018 17:41:33.363565   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:41:33.363579   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem -> /usr/share/ca-certificates/4320.pem
	I1018 17:41:33.363590   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> /usr/share/ca-certificates/43202.pem
	I1018 17:41:33.363643   51251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:41:33.388336   51251 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800/id_rsa Username:docker}
	I1018 17:41:33.489250   51251 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1018 17:41:33.493494   51251 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1018 17:41:33.511835   51251 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1018 17:41:33.515898   51251 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1018 17:41:33.524188   51251 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1018 17:41:33.527936   51251 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1018 17:41:33.536545   51251 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1018 17:41:33.540347   51251 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1018 17:41:33.549002   51251 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1018 17:41:33.552698   51251 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1018 17:41:33.561692   51251 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1018 17:41:33.565522   51251 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1018 17:41:33.574471   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 17:41:33.598033   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 17:41:33.620604   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 17:41:33.644520   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 17:41:33.671246   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1018 17:41:33.694599   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 17:41:33.716649   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 17:41:33.739805   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 17:41:33.761744   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 17:41:33.784279   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem --> /usr/share/ca-certificates/4320.pem (1338 bytes)
	I1018 17:41:33.807665   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /usr/share/ca-certificates/43202.pem (1708 bytes)
	I1018 17:41:33.831497   51251 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1018 17:41:33.845903   51251 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1018 17:41:33.860149   51251 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1018 17:41:33.874010   51251 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1018 17:41:33.893500   51251 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1018 17:41:33.908151   51251 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1018 17:41:33.922971   51251 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1018 17:41:33.937486   51251 ssh_runner.go:195] Run: openssl version
	I1018 17:41:33.944301   51251 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43202.pem && ln -fs /usr/share/ca-certificates/43202.pem /etc/ssl/certs/43202.pem"
	I1018 17:41:33.953654   51251 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43202.pem
	I1018 17:41:33.958036   51251 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 17:19 /usr/share/ca-certificates/43202.pem
	I1018 17:41:33.958171   51251 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43202.pem
	I1018 17:41:34.004993   51251 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43202.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 17:41:34.015337   51251 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 17:41:34.024718   51251 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:41:34.029508   51251 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:41:34.029667   51251 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:41:34.076487   51251 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 17:41:34.085949   51251 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4320.pem && ln -fs /usr/share/ca-certificates/4320.pem /etc/ssl/certs/4320.pem"
	I1018 17:41:34.095637   51251 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4320.pem
	I1018 17:41:34.100153   51251 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 17:19 /usr/share/ca-certificates/4320.pem
	I1018 17:41:34.100269   51251 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4320.pem
	I1018 17:41:34.148268   51251 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4320.pem /etc/ssl/certs/51391683.0"
	I1018 17:41:34.158037   51251 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 17:41:34.162480   51251 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 17:41:34.206936   51251 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 17:41:34.251076   51251 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 17:41:34.294598   51251 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 17:41:34.337252   51251 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 17:41:34.379050   51251 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
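The openssl x509 -checkend 86400 calls above ask one question per certificate: will it still be valid 24 hours from now? The equivalent check in Go with crypto/x509, assuming a PEM-encoded certificate on disk (the file path is taken from the log only as an example):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the first certificate in a PEM file will be
    // expired d from now — what "openssl x509 -checkend 86400" tests.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	fmt.Println(soon, err)
    }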
	I1018 17:41:34.422861   51251 kubeadm.go:934] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1018 17:41:34.423031   51251 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-181800-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-181800 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 17:41:34.423078   51251 kube-vip.go:115] generating kube-vip config ...
	I1018 17:41:34.423166   51251 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1018 17:41:34.435895   51251 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1018 17:41:34.435996   51251 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1018 17:41:34.436081   51251 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 17:41:34.444655   51251 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 17:41:34.444772   51251 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1018 17:41:34.452743   51251 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1018 17:41:34.466348   51251 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 17:41:34.479899   51251 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
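kube-vip runs as a static pod: the generated YAML above is copied to /etc/kubernetes/manifests/kube-vip.yaml and the kubelet picks it up on its next sync of that directory. A minimal sketch of writing such a manifest atomically (temp file plus rename) so the kubelet never sees a half-written file — the helper and contents are illustrative, not minikube's actual code:

    package main

    import (
    	"os"
    	"path/filepath"
    )

    // writeManifest writes a static pod manifest atomically into the kubelet's
    // manifests directory.
    func writeManifest(dir, name string, data []byte) error {
    	tmp := filepath.Join(dir, "."+name+".tmp")
    	if err := os.WriteFile(tmp, data, 0o644); err != nil {
    		return err
    	}
    	return os.Rename(tmp, filepath.Join(dir, name))
    }

    func main() {
    	manifest := []byte("apiVersion: v1\nkind: Pod\n# ... kube-vip spec as shown in the log above ...\n")
    	if err := writeManifest("/etc/kubernetes/manifests", "kube-vip.yaml", manifest); err != nil {
    		panic(err)
    	}
    }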
	I1018 17:41:34.497063   51251 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1018 17:41:34.500892   51251 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 17:41:34.516267   51251 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 17:41:34.674326   51251 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 17:41:34.690850   51251 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 17:41:34.691288   51251 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:41:34.696864   51251 out.go:179] * Verifying Kubernetes components...
	I1018 17:41:34.699590   51251 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 17:41:34.858485   51251 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 17:41:34.875760   51251 kapi.go:59] client config for ha-181800: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/client.key", CAFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1018 17:41:34.876060   51251 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
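The kapi.go line above dumps the client-go rest.Config built from the profile's client certificate, key and CA, and kubeadm.go then replaces the stale VIP host with a reachable node address. A hedged sketch of the same construction with client-go; the paths are placeholders, only the two hosts come from the log:

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    func main() {
    	cfg := &rest.Config{
    		Host: "https://192.168.49.254:8443",
    		TLSClientConfig: rest.TLSClientConfig{
    			CertFile: "/path/to/profiles/ha-181800/client.crt", // placeholder paths
    			KeyFile:  "/path/to/profiles/ha-181800/client.key",
    			CAFile:   "/path/to/ca.crt",
    		},
    	}
    	// Override a stale VIP host with a node address, as kubeadm.go:491 logs.
    	cfg.Host = "https://192.168.49.2:8443"

    	clientset, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(clientset != nil)
    }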
	I1018 17:41:34.876378   51251 node_ready.go:35] waiting up to 6m0s for node "ha-181800-m02" to be "Ready" ...
	I1018 17:41:41.842514   51251 node_ready.go:49] node "ha-181800-m02" is "Ready"
	I1018 17:41:41.842547   51251 node_ready.go:38] duration metric: took 6.966151068s for node "ha-181800-m02" to be "Ready" ...
	I1018 17:41:41.842561   51251 api_server.go:52] waiting for apiserver process to appear ...
	I1018 17:41:41.842620   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:42.343686   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:42.843043   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:43.343313   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:43.843326   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:44.343648   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:44.843315   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:45.342911   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:45.842777   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:46.343420   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:46.843693   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:47.342746   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:47.843464   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:48.342878   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:48.843391   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:49.342759   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:49.843483   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:50.342789   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:50.842761   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:51.342785   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:51.843356   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:52.342785   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:52.843177   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:53.342698   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:53.842872   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:54.343544   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:54.842904   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:55.343425   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:55.843434   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:56.343297   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:56.843518   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:57.343357   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:57.842816   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:58.343642   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:58.842783   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:59.343043   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:59.843412   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:00.342951   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:00.843389   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:01.342774   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:01.842787   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:02.343236   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:02.842685   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:03.342751   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:03.843695   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:04.342729   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:04.843543   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:05.343721   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:05.843447   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:06.342743   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:06.842790   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:07.343656   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:07.843541   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:08.343267   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:08.843707   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:09.342771   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:09.843748   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:10.342856   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:10.842752   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:11.343307   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:11.842677   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:12.343443   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:12.843733   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:13.343641   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:13.842734   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:14.343649   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:14.842779   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:15.342756   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:15.842763   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:16.343741   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:16.842779   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:17.342825   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:17.843340   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:18.342759   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:18.842772   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:19.342755   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:19.842777   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:20.343137   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:20.843594   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:21.343397   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:21.843388   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:22.342798   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:22.843107   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:23.343587   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:23.842910   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:24.343458   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:24.843264   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:25.342775   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:25.842894   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:26.343732   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:26.842775   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:27.342787   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:27.842760   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:28.342772   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:28.843266   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:29.343220   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:29.843228   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:30.343087   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:30.842732   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:31.342878   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:31.843084   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:32.343181   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:32.843480   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:33.343320   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:33.842755   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:34.342929   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
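The long run of pgrep calls above is a fixed-interval poll: roughly every 500ms the tool asks over SSH whether a kube-apiserver process has appeared, and after the step's deadline it falls back to gathering logs (which begins just below). A plain-stdlib sketch of that polling pattern; the check function here runs pgrep locally and is only a placeholder for the remote check in the log:

    package main

    import (
    	"context"
    	"fmt"
    	"os/exec"
    	"time"
    )

    // pollUntil runs check every interval until it succeeds or ctx expires.
    func pollUntil(ctx context.Context, interval time.Duration, check func() bool) error {
    	ticker := time.NewTicker(interval)
    	defer ticker.Stop()
    	for {
    		if check() {
    			return nil
    		}
    		select {
    		case <-ctx.Done():
    			return ctx.Err()
    		case <-ticker.C:
    		}
    	}
    }

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
    	defer cancel()
    	err := pollUntil(ctx, 500*time.Millisecond, func() bool {
    		// Placeholder check: does a kube-apiserver process exist locally?
    		return exec.Command("pgrep", "-x", "kube-apiserver").Run() == nil
    	})
    	fmt.Println(err)
    }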
	I1018 17:42:34.842842   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:42:34.842930   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:42:34.869988   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:42:34.870010   51251 cri.go:89] found id: ""
	I1018 17:42:34.870018   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:42:34.870073   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:34.873710   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:42:34.873778   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:42:34.899173   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:42:34.899196   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:42:34.899202   51251 cri.go:89] found id: ""
	I1018 17:42:34.899209   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:42:34.899263   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:34.903214   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:34.906828   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:42:34.906903   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:42:34.933625   51251 cri.go:89] found id: ""
	I1018 17:42:34.933648   51251 logs.go:282] 0 containers: []
	W1018 17:42:34.933656   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:42:34.933663   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:42:34.933723   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:42:34.959655   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:42:34.959675   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:42:34.959680   51251 cri.go:89] found id: ""
	I1018 17:42:34.959688   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:42:34.959743   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:34.972509   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:34.977434   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:42:34.977506   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:42:35.014139   51251 cri.go:89] found id: ""
	I1018 17:42:35.014165   51251 logs.go:282] 0 containers: []
	W1018 17:42:35.014173   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:42:35.014180   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:42:35.014287   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:42:35.047968   51251 cri.go:89] found id: "f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:42:35.047993   51251 cri.go:89] found id: ""
	I1018 17:42:35.048002   51251 logs.go:282] 1 containers: [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c]
	I1018 17:42:35.048056   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:35.052096   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:42:35.052159   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:42:35.087604   51251 cri.go:89] found id: ""
	I1018 17:42:35.087628   51251 logs.go:282] 0 containers: []
	W1018 17:42:35.087636   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:42:35.087645   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:42:35.087658   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:42:35.135319   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:42:35.135352   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:42:35.186498   51251 logs.go:123] Gathering logs for kube-controller-manager [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c] ...
	I1018 17:42:35.186531   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:42:35.217338   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:42:35.217381   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:42:35.327154   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:42:35.327184   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:42:35.341645   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:42:35.341672   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:42:35.747254   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:42:35.739248    1479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:35.739909    1479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:35.741574    1479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:35.742106    1479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:35.743686    1479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:42:35.739248    1479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:35.739909    1479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:35.741574    1479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:35.742106    1479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:35.743686    1479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:42:35.747277   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:42:35.747291   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:42:35.784796   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:42:35.784825   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:42:35.811760   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:42:35.811786   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:42:35.886991   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:42:35.887025   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:42:35.921904   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:42:35.921933   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
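The log-gathering pass above follows a two-step flow: list matching container IDs with crictl ps -a --quiet --name=<name>, then tail each one with crictl logs --tail 400 <id>. A small sketch of that flow run locally (in the test these commands are executed over SSH; here they are invoked directly for illustration):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerIDs returns the IDs of all containers whose name matches name,
    // using the same crictl invocation seen in the log.
    func containerIDs(name string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	ids, err := containerIDs("kube-apiserver")
    	if err != nil {
    		panic(err)
    	}
    	for _, id := range ids {
    		logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
    		fmt.Printf("=== %s ===\n%s\n", id, logs)
    	}
    }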
	I1018 17:42:38.449291   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:38.459790   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:42:38.459857   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:42:38.486350   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:42:38.486373   51251 cri.go:89] found id: ""
	I1018 17:42:38.486383   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:42:38.486444   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:38.490359   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:42:38.490430   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:42:38.518049   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:42:38.518073   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:42:38.518078   51251 cri.go:89] found id: ""
	I1018 17:42:38.518097   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:42:38.518156   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:38.522183   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:38.526138   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:42:38.526213   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:42:38.552857   51251 cri.go:89] found id: ""
	I1018 17:42:38.552881   51251 logs.go:282] 0 containers: []
	W1018 17:42:38.552890   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:42:38.552896   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:42:38.552996   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:42:38.581427   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:42:38.581447   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:42:38.581452   51251 cri.go:89] found id: ""
	I1018 17:42:38.581460   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:42:38.581516   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:38.585308   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:38.588834   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:42:38.588907   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:42:38.626035   51251 cri.go:89] found id: ""
	I1018 17:42:38.626060   51251 logs.go:282] 0 containers: []
	W1018 17:42:38.626068   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:42:38.626074   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:42:38.626180   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:42:38.654519   51251 cri.go:89] found id: "f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:42:38.654541   51251 cri.go:89] found id: ""
	I1018 17:42:38.654549   51251 logs.go:282] 1 containers: [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c]
	I1018 17:42:38.654606   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:38.659468   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:42:38.659536   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:42:38.685688   51251 cri.go:89] found id: ""
	I1018 17:42:38.685717   51251 logs.go:282] 0 containers: []
	W1018 17:42:38.685726   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:42:38.685735   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:42:38.685747   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:42:38.783795   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:42:38.783829   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:42:38.826341   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:42:38.826373   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:42:38.860295   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:42:38.860328   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:42:38.914363   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:42:38.914398   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:42:38.945563   51251 logs.go:123] Gathering logs for kube-controller-manager [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c] ...
	I1018 17:42:38.945589   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:42:38.986953   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:42:38.986976   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:42:39.069689   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:42:39.069729   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:42:39.111763   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:42:39.111827   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:42:39.125634   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:42:39.125711   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:42:39.199836   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:42:39.189569    1644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:39.190870    1644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:39.192604    1644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:39.193407    1644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:39.194944    1644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:42:39.189569    1644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:39.190870    1644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:39.192604    1644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:39.193407    1644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:39.194944    1644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:42:39.199901   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:42:39.199927   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
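Each "describe nodes" attempt above fails the same way: kubectl on the node gets connection refused on localhost:8443, even though a kube-apiserver container (707afb86...) exists. A quick diagnostic sketch from inside the node to confirm whether anything is listening on that port and what the existing apiserver container reports (ss and curl are assumed to be available in the node image; this is not part of the test itself):

	# anything bound to 8443?
	sudo ss -ltnp | grep 8443 || echo "nothing listening on 8443"
	# does a request get past the TCP layer at all?
	curl -skv https://localhost:8443/version 2>&1 | tail -n 5
	# last lines from the apiserver container the log found
	sudo crictl logs --tail 50 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4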
	I1018 17:42:41.727280   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:41.737746   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:42:41.737830   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:42:41.764569   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:42:41.764587   51251 cri.go:89] found id: ""
	I1018 17:42:41.764595   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:42:41.764651   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:41.768619   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:42:41.768692   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:42:41.795219   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:42:41.795239   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:42:41.795244   51251 cri.go:89] found id: ""
	I1018 17:42:41.795251   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:42:41.795315   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:41.799045   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:41.802635   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:42:41.802708   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:42:41.829223   51251 cri.go:89] found id: ""
	I1018 17:42:41.829246   51251 logs.go:282] 0 containers: []
	W1018 17:42:41.829256   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:42:41.829262   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:42:41.829319   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:42:41.863591   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:42:41.863612   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:42:41.863617   51251 cri.go:89] found id: ""
	I1018 17:42:41.863625   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:42:41.863708   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:41.867633   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:41.871288   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:42:41.871365   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:42:41.907130   51251 cri.go:89] found id: ""
	I1018 17:42:41.907154   51251 logs.go:282] 0 containers: []
	W1018 17:42:41.907162   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:42:41.907179   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:42:41.907239   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:42:41.937193   51251 cri.go:89] found id: "f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:42:41.937215   51251 cri.go:89] found id: ""
	I1018 17:42:41.937223   51251 logs.go:282] 1 containers: [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c]
	I1018 17:42:41.937281   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:41.941168   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:42:41.941244   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:42:41.993845   51251 cri.go:89] found id: ""
	I1018 17:42:41.993923   51251 logs.go:282] 0 containers: []
	W1018 17:42:41.993944   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:42:41.993955   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:42:41.993967   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:42:42.041265   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:42:42.041296   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:42:42.070875   51251 logs.go:123] Gathering logs for kube-controller-manager [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c] ...
	I1018 17:42:42.070904   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:42:42.106610   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:42:42.106642   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:42:42.194367   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:42:42.194403   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:42:42.229250   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:42:42.229279   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:42:42.283222   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:42:42.283254   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:42:42.343661   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:42:42.343694   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:42:42.376582   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:42:42.376608   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:42:42.475562   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:42:42.475597   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:42:42.488812   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:42:42.488842   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:42:42.564172   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:42:42.556222    1787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:42.556691    1787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:42.558297    1787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:42.558653    1787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:42.560347    1787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:42:42.556222    1787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:42.556691    1787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:42.558297    1787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:42.558653    1787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:42.560347    1787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:42:45.065078   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:45.086837   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:42:45.086979   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:42:45.165006   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:42:45.165027   51251 cri.go:89] found id: ""
	I1018 17:42:45.165035   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:42:45.165103   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:45.172323   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:42:45.172423   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:42:45.217483   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:42:45.217515   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:42:45.217521   51251 cri.go:89] found id: ""
	I1018 17:42:45.217530   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:42:45.217596   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:45.223128   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:45.227931   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:42:45.228025   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:42:45.283738   51251 cri.go:89] found id: ""
	I1018 17:42:45.283769   51251 logs.go:282] 0 containers: []
	W1018 17:42:45.283789   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:42:45.283818   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:42:45.283897   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:42:45.321652   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:42:45.321679   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:42:45.321685   51251 cri.go:89] found id: ""
	I1018 17:42:45.321694   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:42:45.321760   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:45.332292   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:45.337760   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:42:45.338055   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:42:45.381645   51251 cri.go:89] found id: ""
	I1018 17:42:45.381666   51251 logs.go:282] 0 containers: []
	W1018 17:42:45.381675   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:42:45.381681   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:42:45.381740   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:42:45.413702   51251 cri.go:89] found id: "f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:42:45.413726   51251 cri.go:89] found id: ""
	I1018 17:42:45.413735   51251 logs.go:282] 1 containers: [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c]
	I1018 17:42:45.413793   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:45.417551   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:42:45.417654   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:42:45.444154   51251 cri.go:89] found id: ""
	I1018 17:42:45.444178   51251 logs.go:282] 0 containers: []
	W1018 17:42:45.444186   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:42:45.444195   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:42:45.444206   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:42:45.537154   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:42:45.537189   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:42:45.618318   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:42:45.608985    1856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:45.610405    1856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:45.610978    1856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:45.612722    1856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:45.613098    1856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:42:45.608985    1856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:45.610405    1856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:45.610978    1856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:45.612722    1856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:45.613098    1856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:42:45.618339   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:42:45.618352   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:42:45.643567   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:42:45.643592   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:42:45.680148   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:42:45.680183   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:42:45.732576   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:42:45.732648   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:42:45.763213   51251 logs.go:123] Gathering logs for kube-controller-manager [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c] ...
	I1018 17:42:45.763299   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:42:45.790736   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:42:45.790804   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:42:45.802909   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:42:45.802991   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:42:45.850168   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:42:45.850251   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:42:45.926703   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:42:45.926741   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
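Every cycle pulls the same log sources once the container IDs are known: kubelet and CRI-O through journalctl, each control-plane container through crictl logs --tail 400, plus dmesg and an overall container status listing. Collecting the same data by hand would look roughly like this (commands copied from the log; replace <container-id> with one of the IDs listed above):

	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo crictl logs --tail 400 <container-id>
	sudo crictl ps -a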
	I1018 17:42:48.486114   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:48.497086   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:42:48.497160   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:42:48.525605   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:42:48.525625   51251 cri.go:89] found id: ""
	I1018 17:42:48.525634   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:42:48.525690   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:48.529399   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:42:48.529536   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:42:48.556240   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:42:48.556261   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:42:48.556267   51251 cri.go:89] found id: ""
	I1018 17:42:48.556274   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:42:48.556331   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:48.560148   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:48.563747   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:42:48.563816   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:42:48.591484   51251 cri.go:89] found id: ""
	I1018 17:42:48.591509   51251 logs.go:282] 0 containers: []
	W1018 17:42:48.591518   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:42:48.591524   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:42:48.591584   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:42:48.621441   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:42:48.621461   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:42:48.621467   51251 cri.go:89] found id: ""
	I1018 17:42:48.621475   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:42:48.621531   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:48.625098   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:48.628679   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:42:48.628776   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:42:48.655455   51251 cri.go:89] found id: ""
	I1018 17:42:48.655477   51251 logs.go:282] 0 containers: []
	W1018 17:42:48.655486   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:42:48.655492   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:42:48.655574   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:42:48.686750   51251 cri.go:89] found id: "f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:42:48.686773   51251 cri.go:89] found id: ""
	I1018 17:42:48.686781   51251 logs.go:282] 1 containers: [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c]
	I1018 17:42:48.686841   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:48.690841   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:42:48.690946   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:42:48.718158   51251 cri.go:89] found id: ""
	I1018 17:42:48.718186   51251 logs.go:282] 0 containers: []
	W1018 17:42:48.718194   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:42:48.718203   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:42:48.718213   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:42:48.823716   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:42:48.823756   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:42:48.901683   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:42:48.892565    1991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:48.893314    1991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:48.895024    1991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:48.895911    1991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:48.897573    1991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:42:48.892565    1991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:48.893314    1991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:48.895024    1991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:48.895911    1991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:48.897573    1991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:42:48.901743   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:42:48.901756   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:42:48.946710   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:42:48.946741   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:42:48.989214   51251 logs.go:123] Gathering logs for kube-controller-manager [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c] ...
	I1018 17:42:48.989249   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:42:49.018928   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:42:49.018952   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:42:49.063728   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:42:49.063755   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:42:49.075796   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:42:49.075823   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:42:49.107128   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:42:49.107155   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:42:49.174004   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:42:49.174037   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:42:49.202814   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:42:49.202883   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:42:51.788673   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:51.804334   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:42:51.804402   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:42:51.832430   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:42:51.832451   51251 cri.go:89] found id: ""
	I1018 17:42:51.832459   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:42:51.832517   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:51.836251   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:42:51.836320   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:42:51.862897   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:42:51.862919   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:42:51.862924   51251 cri.go:89] found id: ""
	I1018 17:42:51.862931   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:42:51.862985   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:51.866673   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:51.870113   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:42:51.870200   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:42:51.895781   51251 cri.go:89] found id: ""
	I1018 17:42:51.895805   51251 logs.go:282] 0 containers: []
	W1018 17:42:51.895813   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:42:51.895820   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:42:51.895878   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:42:51.922494   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:42:51.922516   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:42:51.922521   51251 cri.go:89] found id: ""
	I1018 17:42:51.922528   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:42:51.922581   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:51.926209   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:51.929576   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:42:51.929673   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:42:51.956090   51251 cri.go:89] found id: ""
	I1018 17:42:51.956114   51251 logs.go:282] 0 containers: []
	W1018 17:42:51.956122   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:42:51.956129   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:42:51.956187   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:42:51.988490   51251 cri.go:89] found id: "f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:42:51.988512   51251 cri.go:89] found id: ""
	I1018 17:42:51.988520   51251 logs.go:282] 1 containers: [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c]
	I1018 17:42:51.988574   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:51.992080   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:42:51.992159   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:42:52.021598   51251 cri.go:89] found id: ""
	I1018 17:42:52.021624   51251 logs.go:282] 0 containers: []
	W1018 17:42:52.021632   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:42:52.021642   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:42:52.021655   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:42:52.117617   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:42:52.117653   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:42:52.176829   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:42:52.177096   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:42:52.221507   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:42:52.221581   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:42:52.290597   51251 logs.go:123] Gathering logs for kube-controller-manager [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c] ...
	I1018 17:42:52.290630   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:42:52.318933   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:42:52.318959   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:42:52.397646   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:42:52.397679   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:42:52.429557   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:42:52.429592   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:42:52.441410   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:42:52.441440   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:42:52.515237   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:42:52.505394    2179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:52.506908    2179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:52.507495    2179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:52.509107    2179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:52.509748    2179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:42:52.505394    2179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:52.506908    2179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:52.507495    2179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:52.509107    2179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:52.509748    2179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:42:52.515259   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:42:52.515272   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:42:52.546325   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:42:52.546352   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:42:55.073960   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:55.087265   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:42:55.087396   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:42:55.118731   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:42:55.118751   51251 cri.go:89] found id: ""
	I1018 17:42:55.118760   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:42:55.118827   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:55.122773   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:42:55.122841   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:42:55.160245   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:42:55.160267   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:42:55.160284   51251 cri.go:89] found id: ""
	I1018 17:42:55.160293   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:42:55.160353   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:55.164073   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:55.167693   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:42:55.167805   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:42:55.194629   51251 cri.go:89] found id: ""
	I1018 17:42:55.194653   51251 logs.go:282] 0 containers: []
	W1018 17:42:55.194661   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:42:55.194668   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:42:55.194741   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:42:55.222517   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:42:55.222579   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:42:55.222590   51251 cri.go:89] found id: ""
	I1018 17:42:55.222599   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:42:55.222655   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:55.226357   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:55.230025   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:42:55.230092   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:42:55.263792   51251 cri.go:89] found id: ""
	I1018 17:42:55.263816   51251 logs.go:282] 0 containers: []
	W1018 17:42:55.263824   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:42:55.263830   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:42:55.263889   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:42:55.291220   51251 cri.go:89] found id: "f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:42:55.291241   51251 cri.go:89] found id: ""
	I1018 17:42:55.291249   51251 logs.go:282] 1 containers: [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c]
	I1018 17:42:55.291325   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:55.294934   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:42:55.295010   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:42:55.326586   51251 cri.go:89] found id: ""
	I1018 17:42:55.326609   51251 logs.go:282] 0 containers: []
	W1018 17:42:55.326617   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:42:55.326654   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:42:55.326671   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:42:55.401452   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:42:55.392275    2267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:55.393074    2267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:55.393930    2267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:55.395756    2267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:55.396145    2267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:42:55.392275    2267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:55.393074    2267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:55.393930    2267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:55.395756    2267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:55.396145    2267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:42:55.401476   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:42:55.401489   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:42:55.447692   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:42:55.447728   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:42:55.491129   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:42:55.491159   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:42:55.568889   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:42:55.568926   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:42:55.604397   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:42:55.604423   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:42:55.621149   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:42:55.621188   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:42:55.649355   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:42:55.649383   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:42:55.703784   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:42:55.703820   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:42:55.742564   51251 logs.go:123] Gathering logs for kube-controller-manager [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c] ...
	I1018 17:42:55.742592   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:42:55.771921   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:42:55.771952   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:42:58.379973   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:58.390987   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:42:58.391064   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:42:58.420177   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:42:58.420206   51251 cri.go:89] found id: ""
	I1018 17:42:58.420214   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:42:58.420280   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:58.423975   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:42:58.424051   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:42:58.450210   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:42:58.450232   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:42:58.450237   51251 cri.go:89] found id: ""
	I1018 17:42:58.450244   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:42:58.450302   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:58.454890   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:58.458701   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:42:58.458770   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:42:58.483310   51251 cri.go:89] found id: ""
	I1018 17:42:58.483334   51251 logs.go:282] 0 containers: []
	W1018 17:42:58.483342   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:42:58.483348   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:42:58.483405   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:42:58.511930   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:42:58.511958   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:42:58.511963   51251 cri.go:89] found id: ""
	I1018 17:42:58.511970   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:42:58.512025   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:58.515745   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:58.519340   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:42:58.519409   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:42:58.546212   51251 cri.go:89] found id: ""
	I1018 17:42:58.546233   51251 logs.go:282] 0 containers: []
	W1018 17:42:58.546250   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:42:58.546257   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:42:58.546336   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:42:58.573991   51251 cri.go:89] found id: "f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:42:58.574011   51251 cri.go:89] found id: ""
	I1018 17:42:58.574019   51251 logs.go:282] 1 containers: [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c]
	I1018 17:42:58.574073   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:58.577989   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:42:58.578068   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:42:58.609463   51251 cri.go:89] found id: ""
	I1018 17:42:58.609485   51251 logs.go:282] 0 containers: []
	W1018 17:42:58.609493   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:42:58.609520   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:42:58.609542   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:42:58.623900   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:42:58.623929   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:42:58.672129   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:42:58.672159   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:42:58.702420   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:42:58.702447   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:42:58.739914   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:42:58.739941   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:42:58.840389   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:42:58.840423   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:42:58.904498   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:42:58.896431    2440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:58.896966    2440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:58.898915    2440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:58.899719    2440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:58.901011    2440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:42:58.896431    2440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:58.896966    2440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:58.898915    2440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:58.899719    2440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:58.901011    2440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:42:58.904519   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:42:58.904534   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:42:58.933888   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:42:58.933915   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:42:58.967554   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:42:58.967628   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:42:59.028427   51251 logs.go:123] Gathering logs for kube-controller-manager [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c] ...
	I1018 17:42:59.028504   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:42:59.054221   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:42:59.054249   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:43:01.639025   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:43:01.651715   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:43:01.651793   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:43:01.685240   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:01.685309   51251 cri.go:89] found id: ""
	I1018 17:43:01.685339   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:43:01.685423   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:01.690385   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:43:01.690468   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:43:01.719962   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:01.720035   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:01.720055   51251 cri.go:89] found id: ""
	I1018 17:43:01.720076   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:43:01.720148   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:01.723990   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:01.727538   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:43:01.727607   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:43:01.756529   51251 cri.go:89] found id: ""
	I1018 17:43:01.756562   51251 logs.go:282] 0 containers: []
	W1018 17:43:01.756571   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:43:01.756595   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:43:01.756676   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:43:01.789556   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:01.789581   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:01.789586   51251 cri.go:89] found id: ""
	I1018 17:43:01.789594   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:43:01.789659   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:01.794374   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:01.798060   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:43:01.798129   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:43:01.833059   51251 cri.go:89] found id: ""
	I1018 17:43:01.833089   51251 logs.go:282] 0 containers: []
	W1018 17:43:01.833097   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:43:01.833103   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:43:01.833172   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:43:01.860988   51251 cri.go:89] found id: "f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:43:01.861009   51251 cri.go:89] found id: ""
	I1018 17:43:01.861017   51251 logs.go:282] 1 containers: [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c]
	I1018 17:43:01.861076   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:01.865838   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:43:01.865913   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:43:01.893009   51251 cri.go:89] found id: ""
	I1018 17:43:01.893035   51251 logs.go:282] 0 containers: []
	W1018 17:43:01.893043   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:43:01.893052   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:43:01.893064   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:43:01.997703   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:43:01.997739   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:02.060549   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:43:02.060581   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:02.094970   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:43:02.095001   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:02.161721   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:43:02.161757   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:02.209000   51251 logs.go:123] Gathering logs for kube-controller-manager [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c] ...
	I1018 17:43:02.209029   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:43:02.239896   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:43:02.239920   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:43:02.275701   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:43:02.275727   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:43:02.288373   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:43:02.288400   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:43:02.360448   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:43:02.351719    2599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:02.352549    2599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:02.354058    2599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:02.354626    2599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:02.356320    2599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:43:02.351719    2599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:02.352549    2599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:02.354058    2599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:02.354626    2599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:02.356320    2599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:43:02.360469   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:43:02.360481   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:02.390739   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:43:02.390769   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:43:04.978257   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:43:04.988916   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:43:04.989037   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:43:05.019550   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:05.019573   51251 cri.go:89] found id: ""
	I1018 17:43:05.019582   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:43:05.019646   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:05.023992   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:43:05.024069   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:43:05.050514   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:05.050533   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:05.050538   51251 cri.go:89] found id: ""
	I1018 17:43:05.050546   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:43:05.050601   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:05.054386   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:05.058083   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:43:05.058155   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:43:05.093052   51251 cri.go:89] found id: ""
	I1018 17:43:05.093079   51251 logs.go:282] 0 containers: []
	W1018 17:43:05.093088   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:43:05.093096   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:43:05.093200   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:43:05.124045   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:05.124115   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:05.124134   51251 cri.go:89] found id: ""
	I1018 17:43:05.124156   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:43:05.124238   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:05.129085   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:05.134571   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:43:05.134649   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:43:05.162401   51251 cri.go:89] found id: ""
	I1018 17:43:05.162423   51251 logs.go:282] 0 containers: []
	W1018 17:43:05.162432   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:43:05.162439   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:43:05.162505   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:43:05.191429   51251 cri.go:89] found id: "f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:43:05.191451   51251 cri.go:89] found id: ""
	I1018 17:43:05.191459   51251 logs.go:282] 1 containers: [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c]
	I1018 17:43:05.191513   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:05.195222   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:43:05.195291   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:43:05.233765   51251 cri.go:89] found id: ""
	I1018 17:43:05.233789   51251 logs.go:282] 0 containers: []
	W1018 17:43:05.233797   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:43:05.233813   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:43:05.233824   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:43:05.314015   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:43:05.314049   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:43:05.343775   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:43:05.343799   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:43:05.447678   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:43:05.447715   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:43:05.461224   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:43:05.461251   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:43:05.531644   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:43:05.521503    2698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:05.523802    2698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:05.525607    2698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:05.526297    2698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:05.527849    2698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:43:05.521503    2698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:05.523802    2698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:05.525607    2698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:05.526297    2698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:05.527849    2698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:43:05.531668   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:43:05.531681   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:05.589572   51251 logs.go:123] Gathering logs for kube-controller-manager [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c] ...
	I1018 17:43:05.589609   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:43:05.620844   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:43:05.620871   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:05.649833   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:43:05.649861   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:05.702301   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:43:05.702335   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:05.746579   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:43:05.746612   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:08.279428   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:43:08.290505   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:43:08.290572   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:43:08.323196   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:08.323217   51251 cri.go:89] found id: ""
	I1018 17:43:08.323225   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:43:08.323287   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:08.326970   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:43:08.327042   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:43:08.353811   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:08.353833   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:08.353837   51251 cri.go:89] found id: ""
	I1018 17:43:08.353845   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:43:08.353903   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:08.357796   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:08.361798   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:43:08.361874   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:43:08.390063   51251 cri.go:89] found id: ""
	I1018 17:43:08.390086   51251 logs.go:282] 0 containers: []
	W1018 17:43:08.390094   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:43:08.390104   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:43:08.390164   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:43:08.417117   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:08.417137   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:08.417142   51251 cri.go:89] found id: ""
	I1018 17:43:08.417153   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:43:08.417209   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:08.421291   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:08.424803   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:43:08.424875   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:43:08.450383   51251 cri.go:89] found id: ""
	I1018 17:43:08.450405   51251 logs.go:282] 0 containers: []
	W1018 17:43:08.450412   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:43:08.450419   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:43:08.450517   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:43:08.475291   51251 cri.go:89] found id: "f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:43:08.475312   51251 cri.go:89] found id: ""
	I1018 17:43:08.475321   51251 logs.go:282] 1 containers: [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c]
	I1018 17:43:08.475376   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:08.479043   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:43:08.479113   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:43:08.509786   51251 cri.go:89] found id: ""
	I1018 17:43:08.509809   51251 logs.go:282] 0 containers: []
	W1018 17:43:08.509817   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:43:08.509826   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:43:08.509838   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:43:08.605996   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:43:08.606031   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:43:08.622166   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:43:08.622201   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:43:08.702891   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:43:08.692116    2820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:08.693186    2820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:08.694251    2820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:08.694895    2820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:08.697165    2820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:43:08.692116    2820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:08.693186    2820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:08.694251    2820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:08.694895    2820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:08.697165    2820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:43:08.702955   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:43:08.702973   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:08.732447   51251 logs.go:123] Gathering logs for kube-controller-manager [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c] ...
	I1018 17:43:08.732474   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:43:08.759641   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:43:08.759667   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:43:08.790348   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:43:08.790378   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:08.821468   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:43:08.821493   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:08.873070   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:43:08.873109   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:08.906030   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:43:08.906070   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:08.964907   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:43:08.964966   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:43:11.547663   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:43:11.559867   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:43:11.559932   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:43:11.595124   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:11.595143   51251 cri.go:89] found id: ""
	I1018 17:43:11.595151   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:43:11.595209   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:11.599553   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:43:11.599619   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:43:11.639738   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:11.639820   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:11.639844   51251 cri.go:89] found id: ""
	I1018 17:43:11.639865   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:43:11.639950   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:11.646442   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:11.651648   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:43:11.651787   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:43:11.695203   51251 cri.go:89] found id: ""
	I1018 17:43:11.695286   51251 logs.go:282] 0 containers: []
	W1018 17:43:11.695316   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:43:11.695337   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:43:11.695418   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:43:11.744347   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:11.744416   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:11.744441   51251 cri.go:89] found id: ""
	I1018 17:43:11.744463   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:43:11.744558   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:11.751191   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:11.755958   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:43:11.756105   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:43:11.791266   51251 cri.go:89] found id: ""
	I1018 17:43:11.791331   51251 logs.go:282] 0 containers: []
	W1018 17:43:11.791353   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:43:11.791383   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:43:11.791474   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:43:11.834876   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:11.834963   51251 cri.go:89] found id: "f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:43:11.834989   51251 cri.go:89] found id: ""
	I1018 17:43:11.835011   51251 logs.go:282] 2 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c]
	I1018 17:43:11.835086   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:11.841198   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:11.846580   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:43:11.846715   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:43:11.897749   51251 cri.go:89] found id: ""
	I1018 17:43:11.897822   51251 logs.go:282] 0 containers: []
	W1018 17:43:11.897846   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:43:11.897881   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:43:11.897928   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:11.943452   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:43:11.943536   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:12.005227   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:43:12.005338   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:43:12.062557   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:43:12.062624   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:43:12.182021   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:43:12.182095   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:43:12.197845   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:43:12.197920   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:12.260741   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:43:12.260817   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:12.335387   51251 logs.go:123] Gathering logs for kube-controller-manager [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c] ...
	I1018 17:43:12.335466   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:43:12.369750   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:43:12.369775   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:43:12.449888   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:43:12.449923   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:43:12.545478   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:43:12.535379    3028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:12.536014    3028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:12.539746    3028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:12.540245    3028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:12.541774    3028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:43:12.535379    3028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:12.536014    3028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:12.539746    3028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:12.540245    3028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:12.541774    3028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:43:12.545496   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:43:12.545509   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:12.577372   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:43:12.577397   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:15.116790   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:43:15.132080   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:43:15.132161   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:43:15.159487   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:15.159506   51251 cri.go:89] found id: ""
	I1018 17:43:15.159515   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:43:15.159567   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:15.163178   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:43:15.163272   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:43:15.191277   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:15.191296   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:15.191300   51251 cri.go:89] found id: ""
	I1018 17:43:15.191315   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:43:15.191372   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:15.195019   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:15.198423   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:43:15.198491   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:43:15.225886   51251 cri.go:89] found id: ""
	I1018 17:43:15.225910   51251 logs.go:282] 0 containers: []
	W1018 17:43:15.225919   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:43:15.225925   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:43:15.225986   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:43:15.251392   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:15.251414   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:15.251419   51251 cri.go:89] found id: ""
	I1018 17:43:15.251426   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:43:15.251480   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:15.255201   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:15.258787   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:43:15.258880   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:43:15.285767   51251 cri.go:89] found id: ""
	I1018 17:43:15.285831   51251 logs.go:282] 0 containers: []
	W1018 17:43:15.285854   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:43:15.285878   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:43:15.285951   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:43:15.316160   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:15.316219   51251 cri.go:89] found id: "f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:43:15.316239   51251 cri.go:89] found id: ""
	I1018 17:43:15.316261   51251 logs.go:282] 2 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c]
	I1018 17:43:15.316333   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:15.320128   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:15.323596   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:43:15.323665   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:43:15.349496   51251 cri.go:89] found id: ""
	I1018 17:43:15.349522   51251 logs.go:282] 0 containers: []
	W1018 17:43:15.349531   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:43:15.349541   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:43:15.349569   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:15.420881   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:43:15.420916   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:43:15.451259   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:43:15.451285   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:43:15.548698   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:43:15.548740   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:43:15.561517   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:43:15.561546   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:15.608036   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:43:15.608071   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:15.641405   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:43:15.641431   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:15.668198   51251 logs.go:123] Gathering logs for kube-controller-manager [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c] ...
	I1018 17:43:15.668226   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:43:15.694563   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:43:15.694591   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:43:15.770902   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:43:15.770936   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:43:15.836895   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:43:15.828987    3175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:15.829667    3175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:15.831325    3175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:15.831865    3175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:15.833343    3175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:43:15.828987    3175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:15.829667    3175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:15.831325    3175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:15.831865    3175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:15.833343    3175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:43:15.836919   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:43:15.836931   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:15.865888   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:43:15.865916   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:18.408468   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:43:18.419326   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:43:18.419393   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:43:18.443753   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:18.443775   51251 cri.go:89] found id: ""
	I1018 17:43:18.443783   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:43:18.443839   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:18.447404   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:43:18.447481   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:43:18.473566   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:18.473627   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:18.473639   51251 cri.go:89] found id: ""
	I1018 17:43:18.473647   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:43:18.473702   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:18.477524   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:18.481293   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:43:18.481397   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:43:18.507887   51251 cri.go:89] found id: ""
	I1018 17:43:18.507965   51251 logs.go:282] 0 containers: []
	W1018 17:43:18.507991   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:43:18.508011   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:43:18.508082   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:43:18.534789   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:18.534809   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:18.534814   51251 cri.go:89] found id: ""
	I1018 17:43:18.534821   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:43:18.534876   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:18.538531   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:18.542059   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:43:18.542133   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:43:18.567277   51251 cri.go:89] found id: ""
	I1018 17:43:18.567299   51251 logs.go:282] 0 containers: []
	W1018 17:43:18.567307   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:43:18.567316   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:43:18.567375   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:43:18.593882   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:18.593902   51251 cri.go:89] found id: "f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:43:18.593907   51251 cri.go:89] found id: ""
	I1018 17:43:18.593914   51251 logs.go:282] 2 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c]
	I1018 17:43:18.593971   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:18.598057   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:18.601482   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:43:18.601548   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:43:18.626724   51251 cri.go:89] found id: ""
	I1018 17:43:18.626748   51251 logs.go:282] 0 containers: []
	W1018 17:43:18.626756   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:43:18.626766   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:43:18.626777   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:43:18.720186   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:43:18.720220   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:43:18.732342   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:43:18.732372   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:18.777781   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:43:18.777813   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:18.814519   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:43:18.814548   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:18.842102   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:43:18.842129   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:18.870191   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:43:18.870215   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:43:18.940137   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:43:18.931877    3300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:18.932545    3300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:18.934242    3300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:18.934870    3300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:18.936368    3300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:43:18.931877    3300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:18.932545    3300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:18.934242    3300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:18.934870    3300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:18.936368    3300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:43:18.940159   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:43:18.940171   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:18.972118   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:43:18.972143   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:19.028698   51251 logs.go:123] Gathering logs for kube-controller-manager [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c] ...
	I1018 17:43:19.028731   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:43:19.053561   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:43:19.053588   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:43:19.134177   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:43:19.134210   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:43:21.666074   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:43:21.677905   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:43:21.677982   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:43:21.710449   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:21.710470   51251 cri.go:89] found id: ""
	I1018 17:43:21.710479   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:43:21.710534   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:21.714253   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:43:21.714326   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:43:21.741478   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:21.741547   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:21.741558   51251 cri.go:89] found id: ""
	I1018 17:43:21.741566   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:43:21.741627   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:21.745535   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:21.750022   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:43:21.750140   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:43:21.780635   51251 cri.go:89] found id: ""
	I1018 17:43:21.780708   51251 logs.go:282] 0 containers: []
	W1018 17:43:21.780731   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:43:21.780778   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:43:21.780856   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:43:21.808496   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:21.808514   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:21.808518   51251 cri.go:89] found id: ""
	I1018 17:43:21.808525   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:43:21.808582   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:21.812401   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:21.815810   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:43:21.815876   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:43:21.845624   51251 cri.go:89] found id: ""
	I1018 17:43:21.845657   51251 logs.go:282] 0 containers: []
	W1018 17:43:21.845665   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:43:21.845672   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:43:21.845731   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:43:21.871314   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:21.871332   51251 cri.go:89] found id: "f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:43:21.871336   51251 cri.go:89] found id: ""
	I1018 17:43:21.871343   51251 logs.go:282] 2 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c]
	I1018 17:43:21.871399   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:21.875259   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:21.878771   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:43:21.878839   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:43:21.913289   51251 cri.go:89] found id: ""
	I1018 17:43:21.913312   51251 logs.go:282] 0 containers: []
	W1018 17:43:21.913321   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:43:21.913330   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:43:21.913341   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:21.990540   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:43:21.990577   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:22.023215   51251 logs.go:123] Gathering logs for kube-controller-manager [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c] ...
	I1018 17:43:22.023243   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:43:22.053561   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:43:22.053588   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:22.081164   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:43:22.081191   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:22.145177   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:43:22.145212   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:22.184829   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:43:22.184859   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:22.228057   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:43:22.228081   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:43:22.316019   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:43:22.316053   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:43:22.347876   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:43:22.347901   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:43:22.450507   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:43:22.450541   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:43:22.462429   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:43:22.462456   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:43:22.536495   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:43:22.527657    3486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:22.528744    3486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:22.530446    3486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:22.531068    3486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:22.532737    3486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:43:22.527657    3486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:22.528744    3486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:22.530446    3486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:22.531068    3486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:22.532737    3486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:43:25.036723   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:43:25.048068   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:43:25.048137   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:43:25.074496   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:25.074517   51251 cri.go:89] found id: ""
	I1018 17:43:25.074525   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:43:25.074581   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:25.078699   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:43:25.078775   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:43:25.106068   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:25.106088   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:25.106092   51251 cri.go:89] found id: ""
	I1018 17:43:25.106099   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:43:25.106154   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:25.109911   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:25.116299   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:43:25.116392   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:43:25.152465   51251 cri.go:89] found id: ""
	I1018 17:43:25.152545   51251 logs.go:282] 0 containers: []
	W1018 17:43:25.152568   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:43:25.152587   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:43:25.152679   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:43:25.179667   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:25.179690   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:25.179695   51251 cri.go:89] found id: ""
	I1018 17:43:25.179703   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:43:25.179762   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:25.183571   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:25.187316   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:43:25.187431   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:43:25.216762   51251 cri.go:89] found id: ""
	I1018 17:43:25.216796   51251 logs.go:282] 0 containers: []
	W1018 17:43:25.216805   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:43:25.216812   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:43:25.216871   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:43:25.244556   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:25.244578   51251 cri.go:89] found id: ""
	I1018 17:43:25.244587   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:43:25.244642   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:25.248407   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:43:25.248485   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:43:25.274854   51251 cri.go:89] found id: ""
	I1018 17:43:25.274879   51251 logs.go:282] 0 containers: []
	W1018 17:43:25.274888   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:43:25.274897   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:43:25.274908   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:25.331118   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:43:25.331153   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:43:25.411446   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:43:25.411478   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:43:25.462440   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:43:25.462467   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:25.525297   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:43:25.525373   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:25.555066   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:43:25.555092   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:25.581528   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:43:25.581558   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:43:25.682424   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:43:25.682461   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:43:25.695456   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:43:25.695486   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:43:25.766142   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:43:25.757215    3610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:25.757999    3610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:25.759442    3610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:25.759856    3610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:25.761265    3610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:43:25.757215    3610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:25.757999    3610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:25.759442    3610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:25.759856    3610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:25.761265    3610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:43:25.766162   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:43:25.766174   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:25.795404   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:43:25.795433   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:28.337726   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:43:28.348255   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:43:28.348338   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:43:28.382821   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:28.382841   51251 cri.go:89] found id: ""
	I1018 17:43:28.382849   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:43:28.382903   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:28.386571   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:43:28.386653   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:43:28.418956   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:28.418976   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:28.418981   51251 cri.go:89] found id: ""
	I1018 17:43:28.418988   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:43:28.419041   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:28.422637   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:28.426047   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:43:28.426115   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:43:28.450805   51251 cri.go:89] found id: ""
	I1018 17:43:28.450826   51251 logs.go:282] 0 containers: []
	W1018 17:43:28.450834   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:43:28.450841   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:43:28.450897   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:43:28.476049   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:28.476069   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:28.476075   51251 cri.go:89] found id: ""
	I1018 17:43:28.476083   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:43:28.476137   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:28.479674   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:28.483214   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:43:28.483280   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:43:28.509438   51251 cri.go:89] found id: ""
	I1018 17:43:28.509460   51251 logs.go:282] 0 containers: []
	W1018 17:43:28.509468   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:43:28.509475   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:43:28.509531   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:43:28.536762   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:28.536783   51251 cri.go:89] found id: ""
	I1018 17:43:28.536791   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:43:28.536846   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:28.540786   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:43:28.540849   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:43:28.566044   51251 cri.go:89] found id: ""
	I1018 17:43:28.566066   51251 logs.go:282] 0 containers: []
	W1018 17:43:28.566076   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:43:28.566085   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:43:28.566126   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:43:28.668507   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:43:28.668548   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:28.696140   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:43:28.696166   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:28.742992   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:43:28.743028   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:28.773720   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:43:28.773749   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:28.800871   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:43:28.800897   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:43:28.812516   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:43:28.812544   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:43:28.881394   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:43:28.872850    3737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:28.873551    3737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:28.875119    3737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:28.875694    3737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:28.877437    3737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:43:28.872850    3737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:28.873551    3737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:28.875119    3737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:28.875694    3737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:28.877437    3737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:43:28.881466   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:43:28.881493   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:28.920319   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:43:28.920351   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:29.001463   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:43:29.001501   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:43:29.080673   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:43:29.080705   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:43:31.615872   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:43:31.627104   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:43:31.627173   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:43:31.652790   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:31.652812   51251 cri.go:89] found id: ""
	I1018 17:43:31.652820   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:43:31.652880   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:31.656835   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:43:31.656905   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:43:31.684663   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:31.684685   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:31.684690   51251 cri.go:89] found id: ""
	I1018 17:43:31.684698   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:43:31.684752   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:31.688556   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:31.692271   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:43:31.692343   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:43:31.720037   51251 cri.go:89] found id: ""
	I1018 17:43:31.720059   51251 logs.go:282] 0 containers: []
	W1018 17:43:31.720067   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:43:31.720074   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:43:31.720130   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:43:31.745058   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:31.745078   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:31.745083   51251 cri.go:89] found id: ""
	I1018 17:43:31.745090   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:43:31.745144   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:31.748688   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:31.752002   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:43:31.752068   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:43:31.780253   51251 cri.go:89] found id: ""
	I1018 17:43:31.780275   51251 logs.go:282] 0 containers: []
	W1018 17:43:31.780283   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:43:31.780289   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:43:31.780346   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:43:31.806333   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:31.806358   51251 cri.go:89] found id: ""
	I1018 17:43:31.806365   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:43:31.806429   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:31.810331   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:43:31.810403   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:43:31.836140   51251 cri.go:89] found id: ""
	I1018 17:43:31.836205   51251 logs.go:282] 0 containers: []
	W1018 17:43:31.836227   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:43:31.836250   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:43:31.836292   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:31.874437   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:43:31.874512   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:31.901146   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:43:31.901171   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:43:31.998418   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:43:31.998452   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:43:32.014569   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:43:32.014606   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:32.063231   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:43:32.063266   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:32.130021   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:43:32.130061   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:32.160724   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:43:32.160761   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:43:32.239135   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:43:32.239173   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:43:32.285504   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:43:32.285531   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:43:32.361004   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:43:32.352916    3895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:32.353683    3895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:32.355270    3895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:32.355600    3895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:32.357143    3895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:43:32.352916    3895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:32.353683    3895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:32.355270    3895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:32.355600    3895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:32.357143    3895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:43:32.361029   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:43:32.361042   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:34.888854   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:43:34.901112   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:43:34.901187   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:43:34.929962   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:34.929982   51251 cri.go:89] found id: ""
	I1018 17:43:34.929990   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:43:34.930044   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:34.933771   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:43:34.933840   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:43:34.974958   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:34.974990   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:34.974994   51251 cri.go:89] found id: ""
	I1018 17:43:34.975002   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:43:34.975063   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:34.979007   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:34.982588   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:43:34.982669   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:43:35.025772   51251 cri.go:89] found id: ""
	I1018 17:43:35.025794   51251 logs.go:282] 0 containers: []
	W1018 17:43:35.025802   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:43:35.025808   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:43:35.025867   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:43:35.054583   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:35.054606   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:35.054611   51251 cri.go:89] found id: ""
	I1018 17:43:35.054619   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:43:35.054683   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:35.058624   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:35.062166   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:43:35.062249   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:43:35.099459   51251 cri.go:89] found id: ""
	I1018 17:43:35.099482   51251 logs.go:282] 0 containers: []
	W1018 17:43:35.099490   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:43:35.099497   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:43:35.099553   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:43:35.135905   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:35.135927   51251 cri.go:89] found id: ""
	I1018 17:43:35.135936   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:43:35.135993   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:35.139558   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:43:35.139675   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:43:35.167854   51251 cri.go:89] found id: ""
	I1018 17:43:35.167877   51251 logs.go:282] 0 containers: []
	W1018 17:43:35.167886   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:43:35.167895   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:43:35.167906   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:43:35.268911   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:43:35.268953   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:43:35.351239   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:43:35.342070    3975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:35.342707    3975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:35.344447    3975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:35.345185    3975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:35.346039    3975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:43:35.342070    3975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:35.342707    3975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:35.344447    3975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:35.345185    3975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:35.346039    3975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:43:35.351259   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:43:35.351271   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:35.414894   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:43:35.414928   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:35.449804   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:43:35.449834   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:35.506409   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:43:35.506445   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:43:35.595870   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:43:35.595911   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:43:35.608335   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:43:35.608364   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:35.639546   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:43:35.639574   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:35.667961   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:43:35.667987   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:35.698739   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:43:35.698763   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
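The cycle above shows minikube's log collector probing each control-plane component through crictl and journalctl while it waits for the apiserver to come back. A minimal sketch of the same checks run by hand on the node follows; the container ID is a placeholder, and the commands and flag values simply mirror the ones recorded in the log.

    # Locate the kube-apiserver container (the ID printed here is what the
    # collector later feeds to "crictl logs"; <container-id> is a placeholder).
    sudo crictl ps -a --quiet --name=kube-apiserver
    # Tail the last 400 lines of that container's logs.
    sudo crictl logs --tail 400 <container-id>
    # Unit logs for the kubelet and CRI-O, as the collector does.
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    # Kernel messages at warning level and above.
    sudo dmesg --level warn,err,crit,alert,emerg | tail -n 400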
	I1018 17:43:38.237278   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:43:38.248092   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:43:38.248161   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:43:38.274867   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:38.274888   51251 cri.go:89] found id: ""
	I1018 17:43:38.274896   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:43:38.274965   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:38.278707   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:43:38.278774   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:43:38.304232   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:38.304252   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:38.304256   51251 cri.go:89] found id: ""
	I1018 17:43:38.304264   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:43:38.304317   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:38.309670   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:38.313425   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:43:38.313497   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:43:38.344118   51251 cri.go:89] found id: ""
	I1018 17:43:38.344140   51251 logs.go:282] 0 containers: []
	W1018 17:43:38.344149   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:43:38.344156   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:43:38.344214   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:43:38.376271   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:38.376294   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:38.376298   51251 cri.go:89] found id: ""
	I1018 17:43:38.376316   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:43:38.376373   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:38.380454   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:38.384255   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:43:38.384326   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:43:38.409931   51251 cri.go:89] found id: ""
	I1018 17:43:38.409955   51251 logs.go:282] 0 containers: []
	W1018 17:43:38.409963   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:43:38.409977   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:43:38.410038   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:43:38.436568   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:38.436591   51251 cri.go:89] found id: ""
	I1018 17:43:38.436600   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:43:38.436672   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:38.440383   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:43:38.440477   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:43:38.468084   51251 cri.go:89] found id: ""
	I1018 17:43:38.468161   51251 logs.go:282] 0 containers: []
	W1018 17:43:38.468184   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:43:38.468206   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:43:38.468228   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:43:38.565168   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:43:38.565204   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:43:38.577269   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:43:38.577297   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:43:38.646729   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:43:38.638445    4115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:38.639186    4115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:38.640793    4115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:38.641395    4115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:38.643175    4115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:43:38.638445    4115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:38.639186    4115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:38.640793    4115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:38.641395    4115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:38.643175    4115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:43:38.646754   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:43:38.646768   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:38.673481   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:43:38.673507   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:38.719835   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:43:38.719871   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:38.752322   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:43:38.752362   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:38.783579   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:43:38.783606   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:43:38.820293   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:43:38.820322   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:38.878730   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:43:38.878761   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:38.907670   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:43:38.907740   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
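Every "describe nodes" attempt in this window fails with connection refused on localhost:8443, which only says that nothing is serving the apiserver port yet, even though a kube-apiserver container ID is found. A quick manual check, sketched below assuming ss and curl are available in the guest, separates "container present but not running" from "apiserver listening but unhealthy".

    # Is there a kube-apiserver container, and what state is it in?
    sudo crictl ps -a --name=kube-apiserver
    # Is anything listening on the secure port?
    sudo ss -ltn 'sport = :8443'
    # If it is listening, ask the apiserver itself (self-signed cert, hence -k).
    curl -sk https://localhost:8443/healthz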
	I1018 17:43:41.489854   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:43:41.500771   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:43:41.500872   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:43:41.526674   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:41.526696   51251 cri.go:89] found id: ""
	I1018 17:43:41.526706   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:43:41.526770   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:41.531078   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:43:41.531191   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:43:41.562796   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:41.562823   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:41.562829   51251 cri.go:89] found id: ""
	I1018 17:43:41.562837   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:43:41.562959   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:41.566913   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:41.570998   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:43:41.571118   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:43:41.597622   51251 cri.go:89] found id: ""
	I1018 17:43:41.597647   51251 logs.go:282] 0 containers: []
	W1018 17:43:41.597655   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:43:41.597662   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:43:41.597720   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:43:41.627549   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:41.627570   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:41.627575   51251 cri.go:89] found id: ""
	I1018 17:43:41.627583   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:43:41.627642   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:41.631299   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:41.635563   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:43:41.635662   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:43:41.662146   51251 cri.go:89] found id: ""
	I1018 17:43:41.662170   51251 logs.go:282] 0 containers: []
	W1018 17:43:41.662179   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:43:41.662185   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:43:41.662244   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:43:41.693012   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:41.693038   51251 cri.go:89] found id: ""
	I1018 17:43:41.693047   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:43:41.693132   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:41.697195   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:43:41.697265   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:43:41.729826   51251 cri.go:89] found id: ""
	I1018 17:43:41.729850   51251 logs.go:282] 0 containers: []
	W1018 17:43:41.729859   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:43:41.729869   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:43:41.729880   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:43:41.828078   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:43:41.828110   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:43:41.901435   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:43:41.892987    4248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:41.893726    4248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:41.895255    4248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:41.895832    4248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:41.897510    4248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:43:41.892987    4248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:41.893726    4248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:41.895255    4248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:41.895832    4248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:41.897510    4248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:43:41.901459   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:43:41.901472   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:41.929914   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:43:41.929989   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:41.987757   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:43:41.987802   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:42.039791   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:43:42.039830   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:42.075456   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:43:42.075487   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:43:42.149099   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:43:42.149132   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:43:42.164617   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:43:42.164650   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:42.257289   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:43:42.257327   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:42.287081   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:43:42.287112   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:43:44.874333   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:43:44.884870   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:43:44.884968   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:43:44.912153   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:44.912175   51251 cri.go:89] found id: ""
	I1018 17:43:44.912183   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:43:44.912237   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:44.915849   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:43:44.915919   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:43:44.942584   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:44.942604   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:44.942609   51251 cri.go:89] found id: ""
	I1018 17:43:44.942616   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:43:44.942668   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:44.946463   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:44.949841   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:43:44.949907   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:43:44.986621   51251 cri.go:89] found id: ""
	I1018 17:43:44.986646   51251 logs.go:282] 0 containers: []
	W1018 17:43:44.986654   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:43:44.986661   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:43:44.986718   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:43:45.029811   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:45.029830   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:45.029835   51251 cri.go:89] found id: ""
	I1018 17:43:45.029843   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:43:45.029908   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:45.035692   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:45.040000   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:43:45.040078   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:43:45.098723   51251 cri.go:89] found id: ""
	I1018 17:43:45.098751   51251 logs.go:282] 0 containers: []
	W1018 17:43:45.098760   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:43:45.098770   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:43:45.098843   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:43:45.162198   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:45.162228   51251 cri.go:89] found id: ""
	I1018 17:43:45.162238   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:43:45.162307   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:45.167619   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:43:45.167700   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:43:45.211984   51251 cri.go:89] found id: ""
	I1018 17:43:45.212008   51251 logs.go:282] 0 containers: []
	W1018 17:43:45.212018   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:43:45.212028   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:43:45.212041   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:43:45.226821   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:43:45.226851   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:43:45.337585   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:43:45.321955    4382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:45.322823    4382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:45.324086    4382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:45.327115    4382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:45.329027    4382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:43:45.321955    4382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:45.322823    4382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:45.324086    4382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:45.327115    4382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:45.329027    4382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:43:45.337625   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:43:45.337641   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:45.377460   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:43:45.377491   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:45.429187   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:43:45.429222   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:45.457994   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:43:45.458022   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:43:45.540761   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:43:45.540797   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:43:45.573633   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:43:45.573662   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:43:45.672580   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:43:45.672617   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:45.706688   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:43:45.706720   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:45.783083   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:43:45.783120   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:48.314260   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:43:48.324891   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:43:48.324985   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:43:48.357904   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:48.357927   51251 cri.go:89] found id: ""
	I1018 17:43:48.357940   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:43:48.357997   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:48.362392   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:43:48.362474   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:43:48.397905   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:48.397927   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:48.397932   51251 cri.go:89] found id: ""
	I1018 17:43:48.397940   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:43:48.397993   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:48.401719   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:48.404922   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:43:48.405019   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:43:48.431573   51251 cri.go:89] found id: ""
	I1018 17:43:48.431598   51251 logs.go:282] 0 containers: []
	W1018 17:43:48.431606   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:43:48.431613   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:43:48.431673   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:43:48.458728   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:48.458755   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:48.458760   51251 cri.go:89] found id: ""
	I1018 17:43:48.458767   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:43:48.458824   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:48.462488   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:48.465841   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:43:48.465909   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:43:48.491719   51251 cri.go:89] found id: ""
	I1018 17:43:48.491741   51251 logs.go:282] 0 containers: []
	W1018 17:43:48.491749   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:43:48.491755   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:43:48.491815   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:43:48.522124   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:48.522189   51251 cri.go:89] found id: ""
	I1018 17:43:48.522211   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:43:48.522292   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:48.526320   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:43:48.526407   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:43:48.552413   51251 cri.go:89] found id: ""
	I1018 17:43:48.552436   51251 logs.go:282] 0 containers: []
	W1018 17:43:48.552445   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:43:48.552454   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:43:48.552471   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:43:48.647083   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:43:48.647114   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:43:48.660735   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:43:48.660768   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:48.690812   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:43:48.690837   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:48.721178   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:43:48.721208   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:48.748549   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:43:48.748617   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:43:48.823598   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:43:48.823637   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:43:48.855654   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:43:48.855680   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:43:48.931642   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:43:48.922606    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:48.923296    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:48.925195    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:48.925885    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:48.928154    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:43:48.922606    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:48.923296    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:48.925195    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:48.925885    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:48.928154    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:43:48.931664   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:43:48.931678   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:48.984964   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:43:48.985003   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:49.022359   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:43:49.022391   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:51.581690   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:43:51.592535   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:43:51.592618   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:43:51.621442   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:51.621470   51251 cri.go:89] found id: ""
	I1018 17:43:51.621479   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:43:51.621535   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:51.625435   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:43:51.625513   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:43:51.653328   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:51.653354   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:51.653360   51251 cri.go:89] found id: ""
	I1018 17:43:51.653367   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:43:51.653425   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:51.657372   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:51.660911   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:43:51.661083   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:43:51.687435   51251 cri.go:89] found id: ""
	I1018 17:43:51.687456   51251 logs.go:282] 0 containers: []
	W1018 17:43:51.687465   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:43:51.687472   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:43:51.687533   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:43:51.716167   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:51.716189   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:51.716194   51251 cri.go:89] found id: ""
	I1018 17:43:51.716201   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:43:51.716256   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:51.719950   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:51.723494   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:43:51.723575   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:43:51.752147   51251 cri.go:89] found id: ""
	I1018 17:43:51.752171   51251 logs.go:282] 0 containers: []
	W1018 17:43:51.752180   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:43:51.752186   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:43:51.752245   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:43:51.779213   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:51.779236   51251 cri.go:89] found id: ""
	I1018 17:43:51.779244   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:43:51.779305   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:51.782913   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:43:51.782986   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:43:51.810202   51251 cri.go:89] found id: ""
	I1018 17:43:51.810228   51251 logs.go:282] 0 containers: []
	W1018 17:43:51.810236   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:43:51.810246   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:43:51.810258   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:43:51.824029   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:43:51.824058   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:43:51.894919   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:43:51.886698    4657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:51.887712    4657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:51.889389    4657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:51.889843    4657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:51.891356    4657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:43:51.886698    4657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:51.887712    4657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:51.889389    4657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:51.889843    4657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:51.891356    4657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:43:51.894983   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:43:51.895002   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:51.955232   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:43:51.955263   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:51.990622   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:43:51.990651   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:52.020376   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:43:52.020405   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:43:52.066713   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:43:52.066740   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:43:52.172061   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:43:52.172103   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:52.214913   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:43:52.214938   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:52.251763   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:43:52.251854   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:52.311510   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:43:52.311541   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:43:54.894390   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:43:54.907290   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:43:54.907366   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:43:54.940172   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:54.940196   51251 cri.go:89] found id: ""
	I1018 17:43:54.940204   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:43:54.940260   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:54.943992   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:43:54.944086   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:43:54.978188   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:54.978210   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:54.978214   51251 cri.go:89] found id: ""
	I1018 17:43:54.978222   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:43:54.978282   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:54.982194   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:54.986022   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:43:54.986121   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:43:55.029209   51251 cri.go:89] found id: ""
	I1018 17:43:55.029239   51251 logs.go:282] 0 containers: []
	W1018 17:43:55.029248   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:43:55.029256   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:43:55.029318   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:43:55.057246   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:55.057271   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:55.057276   51251 cri.go:89] found id: ""
	I1018 17:43:55.057283   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:43:55.057336   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:55.061051   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:55.064367   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:43:55.064436   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:43:55.095243   51251 cri.go:89] found id: ""
	I1018 17:43:55.095307   51251 logs.go:282] 0 containers: []
	W1018 17:43:55.095329   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:43:55.095341   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:43:55.095399   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:43:55.122785   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:55.122804   51251 cri.go:89] found id: ""
	I1018 17:43:55.122813   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:43:55.122876   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:55.132639   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:43:55.132738   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:43:55.162942   51251 cri.go:89] found id: ""
	I1018 17:43:55.162977   51251 logs.go:282] 0 containers: []
	W1018 17:43:55.162986   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:43:55.163011   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:43:55.163032   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:55.228280   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:43:55.228312   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:55.259473   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:43:55.259500   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:43:55.292185   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:43:55.292220   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:55.341717   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:43:55.341749   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:55.375698   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:43:55.375727   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:55.402916   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:43:55.402942   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:43:55.490846   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:43:55.490886   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:43:55.587437   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:43:55.587478   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:43:55.600254   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:43:55.600280   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:43:55.666266   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:43:55.657772    4845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:55.658733    4845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:55.660294    4845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:55.660924    4845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:55.662498    4845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:43:55.657772    4845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:55.658733    4845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:55.660294    4845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:55.660924    4845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:55.662498    4845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:43:55.666289   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:43:55.666311   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:58.191608   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:43:58.207197   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:43:58.207266   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:43:58.241572   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:58.241593   51251 cri.go:89] found id: ""
	I1018 17:43:58.241602   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:43:58.241656   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:58.245301   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:43:58.245380   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:43:58.275809   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:58.275830   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:58.275835   51251 cri.go:89] found id: ""
	I1018 17:43:58.275842   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:43:58.275898   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:58.279806   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:58.283389   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:43:58.283459   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:43:58.312440   51251 cri.go:89] found id: ""
	I1018 17:43:58.312464   51251 logs.go:282] 0 containers: []
	W1018 17:43:58.312472   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:43:58.312479   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:43:58.312535   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:43:58.341315   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:58.341341   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:58.341346   51251 cri.go:89] found id: ""
	I1018 17:43:58.341354   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:43:58.341418   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:58.345155   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:58.348837   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:43:58.348906   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:43:58.375741   51251 cri.go:89] found id: ""
	I1018 17:43:58.375811   51251 logs.go:282] 0 containers: []
	W1018 17:43:58.375843   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:43:58.375861   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:43:58.375951   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:43:58.402340   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:58.402361   51251 cri.go:89] found id: ""
	I1018 17:43:58.402369   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:43:58.402424   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:58.406046   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:43:58.406112   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:43:58.430628   51251 cri.go:89] found id: ""
	I1018 17:43:58.430701   51251 logs.go:282] 0 containers: []
	W1018 17:43:58.430717   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:43:58.430727   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:43:58.430737   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:43:58.524428   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:43:58.524462   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:58.581885   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:43:58.581916   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:58.611949   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:43:58.611979   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:43:58.693414   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:43:58.693450   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:43:58.705470   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:43:58.705496   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:43:58.771817   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:43:58.763821    4948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:58.764175    4948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:58.765665    4948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:58.766083    4948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:58.767558    4948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:43:58.763821    4948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:58.764175    4948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:58.765665    4948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:58.766083    4948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:58.767558    4948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:43:58.771836   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:43:58.771847   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:58.798225   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:43:58.798252   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:58.848969   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:43:58.849000   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:58.887826   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:43:58.887856   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:58.914297   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:43:58.914322   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:44:01.448548   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:44:01.459433   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:44:01.459507   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:44:01.490534   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:01.490566   51251 cri.go:89] found id: ""
	I1018 17:44:01.490575   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:44:01.490649   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:01.494451   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:44:01.494547   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:44:01.522081   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:01.522104   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:01.522109   51251 cri.go:89] found id: ""
	I1018 17:44:01.522117   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:44:01.522175   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:01.526069   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:01.529977   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:44:01.530054   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:44:01.557411   51251 cri.go:89] found id: ""
	I1018 17:44:01.557433   51251 logs.go:282] 0 containers: []
	W1018 17:44:01.557442   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:44:01.557448   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:44:01.557508   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:44:01.585118   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:01.585142   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:01.585147   51251 cri.go:89] found id: ""
	I1018 17:44:01.585155   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:44:01.585218   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:01.588900   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:01.592735   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:44:01.592820   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:44:01.621026   51251 cri.go:89] found id: ""
	I1018 17:44:01.621098   51251 logs.go:282] 0 containers: []
	W1018 17:44:01.621121   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:44:01.621140   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:44:01.621227   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:44:01.649479   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:01.649503   51251 cri.go:89] found id: ""
	I1018 17:44:01.649512   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:44:01.649576   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:01.653509   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:44:01.653601   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:44:01.680380   51251 cri.go:89] found id: ""
	I1018 17:44:01.680405   51251 logs.go:282] 0 containers: []
	W1018 17:44:01.680413   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:44:01.680445   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:44:01.680470   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:01.719413   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:44:01.719445   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:01.778065   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:44:01.778113   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:44:01.863062   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:44:01.863098   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:44:01.933290   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:44:01.925181    5079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:01.926041    5079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:01.926645    5079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:01.928011    5079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:01.928516    5079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:44:01.925181    5079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:01.926041    5079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:01.926645    5079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:01.928011    5079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:01.928516    5079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:44:01.933312   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:44:01.933325   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:01.994141   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:44:01.994175   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:02.027406   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:44:02.027433   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:02.058305   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:44:02.058374   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:44:02.089161   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:44:02.089238   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:44:02.197504   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:44:02.197547   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:44:02.220679   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:44:02.220704   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:04.749655   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:44:04.761329   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:44:04.761399   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:44:04.791310   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:04.791330   51251 cri.go:89] found id: ""
	I1018 17:44:04.791338   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:44:04.791391   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:04.795236   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:44:04.795315   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:44:04.826977   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:04.826999   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:04.827004   51251 cri.go:89] found id: ""
	I1018 17:44:04.827012   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:44:04.827071   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:04.831056   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:04.834547   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:44:04.834619   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:44:04.861994   51251 cri.go:89] found id: ""
	I1018 17:44:04.862019   51251 logs.go:282] 0 containers: []
	W1018 17:44:04.862028   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:44:04.862036   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:44:04.862093   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:44:04.891547   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:04.891568   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:04.891573   51251 cri.go:89] found id: ""
	I1018 17:44:04.891580   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:44:04.891664   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:04.895286   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:04.898803   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:44:04.898879   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:44:04.925892   51251 cri.go:89] found id: ""
	I1018 17:44:04.925917   51251 logs.go:282] 0 containers: []
	W1018 17:44:04.925925   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:44:04.925932   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:44:04.925992   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:44:04.950898   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:04.950920   51251 cri.go:89] found id: ""
	I1018 17:44:04.950937   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:44:04.950992   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:04.954458   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:44:04.954524   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:44:04.985795   51251 cri.go:89] found id: ""
	I1018 17:44:04.985818   51251 logs.go:282] 0 containers: []
	W1018 17:44:04.985826   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:44:04.985845   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:44:04.985857   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:05.039846   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:44:05.039880   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:05.074700   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:44:05.074733   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:05.123696   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:44:05.123722   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:05.162141   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:44:05.162168   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:05.233397   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:44:05.233431   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:05.260751   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:44:05.260780   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:44:05.342549   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:44:05.342585   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:44:05.374809   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:44:05.374833   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:44:05.480225   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:44:05.480260   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:44:05.492409   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:44:05.492433   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:44:05.563815   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:44:05.554079    5263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:05.554775    5263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:05.556564    5263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:05.557183    5263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:05.558926    5263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:44:05.554079    5263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:05.554775    5263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:05.556564    5263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:05.557183    5263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:05.558926    5263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:44:08.065115   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:44:08.076338   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:44:08.076434   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:44:08.104997   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:08.105072   51251 cri.go:89] found id: ""
	I1018 17:44:08.105096   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:44:08.105171   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:08.109342   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:44:08.109473   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:44:08.142036   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:08.142059   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:08.142063   51251 cri.go:89] found id: ""
	I1018 17:44:08.142071   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:44:08.142127   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:08.145811   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:08.149071   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:44:08.149138   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:44:08.178455   51251 cri.go:89] found id: ""
	I1018 17:44:08.178476   51251 logs.go:282] 0 containers: []
	W1018 17:44:08.178485   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:44:08.178491   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:44:08.178547   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:44:08.211837   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:08.211858   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:08.211862   51251 cri.go:89] found id: ""
	I1018 17:44:08.211871   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:44:08.211926   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:08.215306   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:08.218688   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:44:08.218753   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:44:08.245955   51251 cri.go:89] found id: ""
	I1018 17:44:08.245978   51251 logs.go:282] 0 containers: []
	W1018 17:44:08.245987   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:44:08.245994   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:44:08.246072   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:44:08.277970   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:08.277992   51251 cri.go:89] found id: ""
	I1018 17:44:08.278011   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:44:08.278083   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:08.281866   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:44:08.281956   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:44:08.314813   51251 cri.go:89] found id: ""
	I1018 17:44:08.314835   51251 logs.go:282] 0 containers: []
	W1018 17:44:08.314844   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:44:08.314853   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:44:08.314888   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:44:08.326805   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:44:08.326836   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:08.360439   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:44:08.360467   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:08.388919   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:44:08.388973   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:44:08.486321   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:44:08.486351   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:44:08.552337   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:44:08.544684    5352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:08.545314    5352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:08.546893    5352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:08.547374    5352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:08.548846    5352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:44:08.544684    5352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:08.545314    5352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:08.546893    5352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:08.547374    5352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:08.548846    5352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:44:08.552356   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:44:08.552369   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:08.577416   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:44:08.577441   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:08.629938   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:44:08.629973   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:08.689554   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:44:08.689585   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:08.719107   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:44:08.719132   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:44:08.799512   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:44:08.799588   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:44:11.341509   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:44:11.352018   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:44:11.352091   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:44:11.378915   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:11.378937   51251 cri.go:89] found id: ""
	I1018 17:44:11.378946   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:44:11.379001   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:11.382407   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:44:11.382471   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:44:11.407787   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:11.407806   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:11.407811   51251 cri.go:89] found id: ""
	I1018 17:44:11.407818   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:44:11.407902   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:11.411921   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:11.415171   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:44:11.415239   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:44:11.440964   51251 cri.go:89] found id: ""
	I1018 17:44:11.440986   51251 logs.go:282] 0 containers: []
	W1018 17:44:11.440995   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:44:11.441001   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:44:11.441056   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:44:11.470489   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:11.470512   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:11.470516   51251 cri.go:89] found id: ""
	I1018 17:44:11.470523   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:44:11.470579   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:11.474310   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:11.477884   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:44:11.477960   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:44:11.504799   51251 cri.go:89] found id: ""
	I1018 17:44:11.504862   51251 logs.go:282] 0 containers: []
	W1018 17:44:11.504885   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:44:11.504906   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:44:11.505006   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:44:11.533920   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:11.533983   51251 cri.go:89] found id: ""
	I1018 17:44:11.534003   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:44:11.534091   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:11.537702   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:44:11.537789   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:44:11.564923   51251 cri.go:89] found id: ""
	I1018 17:44:11.565058   51251 logs.go:282] 0 containers: []
	W1018 17:44:11.565068   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:44:11.565077   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:44:11.565089   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:44:11.576916   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:44:11.577027   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:44:11.644089   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:44:11.636599    5471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:11.637224    5471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:11.638751    5471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:11.639193    5471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:11.640642    5471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:44:11.636599    5471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:11.637224    5471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:11.638751    5471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:11.639193    5471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:11.640642    5471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:44:11.644109   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:44:11.644123   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:11.698636   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:44:11.698669   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:11.760923   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:44:11.760958   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:11.787821   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:44:11.787851   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:11.820451   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:44:11.820482   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:44:11.851416   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:44:11.851442   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:44:11.946634   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:44:11.946674   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:11.975802   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:44:11.975830   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:12.010031   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:44:12.010112   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
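	The cycle above records, verbatim, the diagnostic commands minikube runs while the control plane is unreachable: a crictl listing per component, journalctl for kubelet and CRI-O, dmesg, and a kubectl describe nodes that keeps failing against localhost:8443. A minimal sketch for re-running the same checks by hand on the node (assuming a shell on the minikube node, e.g. via minikube ssh; only the command forms, binary path, and kubeconfig path are taken from the log, everything else is illustrative):
	
	  # List each control-plane container the same way the log does.
	  for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	    echo "== ${name} =="
	    sudo crictl ps -a --quiet --name="${name}"
	  done
	
	  # The "describe nodes" step that fails repeatedly above, using the exact
	  # kubectl binary and kubeconfig paths from the log lines.
	  sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes \
	    --kubeconfig=/var/lib/minikube/kubeconfig
	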
	I1018 17:44:14.600286   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:44:14.611078   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:44:14.611145   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:44:14.638095   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:14.638116   51251 cri.go:89] found id: ""
	I1018 17:44:14.638124   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:44:14.638205   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:14.641787   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:44:14.641856   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:44:14.668881   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:14.668904   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:14.668910   51251 cri.go:89] found id: ""
	I1018 17:44:14.668918   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:44:14.669001   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:14.672474   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:14.675764   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:44:14.675840   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:44:14.699628   51251 cri.go:89] found id: ""
	I1018 17:44:14.699652   51251 logs.go:282] 0 containers: []
	W1018 17:44:14.699660   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:44:14.699666   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:44:14.699723   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:44:14.724155   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:14.724177   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:14.724182   51251 cri.go:89] found id: ""
	I1018 17:44:14.724190   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:44:14.724260   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:14.728073   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:14.731467   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:44:14.731534   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:44:14.757304   51251 cri.go:89] found id: ""
	I1018 17:44:14.757327   51251 logs.go:282] 0 containers: []
	W1018 17:44:14.757354   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:44:14.757361   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:44:14.757420   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:44:14.784778   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:14.784799   51251 cri.go:89] found id: ""
	I1018 17:44:14.784808   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:44:14.784862   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:14.788408   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:44:14.788477   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:44:14.819756   51251 cri.go:89] found id: ""
	I1018 17:44:14.819778   51251 logs.go:282] 0 containers: []
	W1018 17:44:14.819796   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:44:14.819805   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:44:14.819816   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:14.844668   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:44:14.844698   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:44:14.876534   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:44:14.876564   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:44:14.980256   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:44:14.980340   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:15.044346   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:44:15.044386   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:15.121677   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:44:15.121713   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:44:15.203393   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:44:15.203428   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:44:15.219368   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:44:15.219394   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:44:15.296726   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:44:15.289112    5647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:15.289522    5647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:15.291014    5647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:15.291333    5647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:15.292981    5647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:44:15.289112    5647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:15.289522    5647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:15.291014    5647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:15.291333    5647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:15.292981    5647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:44:15.296748   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:44:15.296761   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:15.322490   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:44:15.322516   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:15.364728   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:44:15.364760   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:17.892524   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:44:17.903413   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:44:17.903482   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:44:17.931967   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:17.931989   51251 cri.go:89] found id: ""
	I1018 17:44:17.931997   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:44:17.932052   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:17.935895   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:44:17.936007   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:44:17.983924   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:17.983945   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:17.983950   51251 cri.go:89] found id: ""
	I1018 17:44:17.983958   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:44:17.984014   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:17.987660   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:17.991127   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:44:17.991201   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:44:18.022803   51251 cri.go:89] found id: ""
	I1018 17:44:18.022827   51251 logs.go:282] 0 containers: []
	W1018 17:44:18.022836   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:44:18.022843   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:44:18.022906   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:44:18.064735   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:18.064754   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:18.064759   51251 cri.go:89] found id: ""
	I1018 17:44:18.064767   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:44:18.064823   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:18.068536   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:18.072878   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:44:18.072982   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:44:18.100206   51251 cri.go:89] found id: ""
	I1018 17:44:18.100237   51251 logs.go:282] 0 containers: []
	W1018 17:44:18.100246   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:44:18.100253   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:44:18.100321   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:44:18.127552   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:18.127575   51251 cri.go:89] found id: ""
	I1018 17:44:18.127584   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:44:18.127641   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:18.131667   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:44:18.131732   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:44:18.162707   51251 cri.go:89] found id: ""
	I1018 17:44:18.162731   51251 logs.go:282] 0 containers: []
	W1018 17:44:18.162739   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:44:18.162748   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:44:18.162763   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:44:18.246228   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:44:18.238684    5739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:18.239276    5739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:18.240721    5739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:18.241146    5739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:18.242608    5739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:44:18.238684    5739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:18.239276    5739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:18.240721    5739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:18.241146    5739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:18.242608    5739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:44:18.246250   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:44:18.246263   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:18.277740   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:44:18.277764   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:18.343394   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:44:18.343427   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:18.383823   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:44:18.383854   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:18.443389   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:44:18.443420   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:18.469522   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:44:18.469550   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:44:18.545455   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:44:18.545487   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:44:18.592352   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:44:18.592376   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:44:18.695698   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:44:18.695735   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:44:18.707163   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:44:18.707192   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:21.235420   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:44:21.245952   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:44:21.246019   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:44:21.271930   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:21.271997   51251 cri.go:89] found id: ""
	I1018 17:44:21.272019   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:44:21.272106   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:21.275968   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:44:21.276036   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:44:21.302979   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:21.302997   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:21.303001   51251 cri.go:89] found id: ""
	I1018 17:44:21.303008   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:44:21.303069   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:21.307879   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:21.311562   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:44:21.311627   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:44:21.339660   51251 cri.go:89] found id: ""
	I1018 17:44:21.339681   51251 logs.go:282] 0 containers: []
	W1018 17:44:21.339690   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:44:21.339695   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:44:21.339752   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:44:21.368389   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:21.368411   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:21.368416   51251 cri.go:89] found id: ""
	I1018 17:44:21.368424   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:44:21.368478   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:21.372383   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:21.375709   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:44:21.375779   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:44:21.401944   51251 cri.go:89] found id: ""
	I1018 17:44:21.402017   51251 logs.go:282] 0 containers: []
	W1018 17:44:21.402040   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:44:21.402058   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:44:21.402140   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:44:21.428284   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:21.428303   51251 cri.go:89] found id: ""
	I1018 17:44:21.428312   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:44:21.428392   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:21.432085   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:44:21.432163   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:44:21.456804   51251 cri.go:89] found id: ""
	I1018 17:44:21.456878   51251 logs.go:282] 0 containers: []
	W1018 17:44:21.456899   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:44:21.456922   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:44:21.456987   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:44:21.530466   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:44:21.522476    5877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:21.523226    5877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:21.524791    5877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:21.525409    5877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:21.526934    5877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:44:21.522476    5877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:21.523226    5877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:21.524791    5877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:21.525409    5877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:21.526934    5877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:44:21.530487   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:44:21.530500   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:21.583954   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:44:21.583988   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:21.624634   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:44:21.624667   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:21.683522   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:44:21.683555   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:21.712030   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:44:21.712058   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:21.743203   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:44:21.743227   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:44:21.823114   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:44:21.823149   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:44:21.854521   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:44:21.854548   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:44:21.957239   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:44:21.957276   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:44:21.974988   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:44:21.975013   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:24.514740   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:44:24.525668   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:44:24.525738   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:44:24.553057   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:24.553087   51251 cri.go:89] found id: ""
	I1018 17:44:24.553096   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:44:24.553152   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:24.556981   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:44:24.557053   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:44:24.583773   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:24.583796   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:24.583801   51251 cri.go:89] found id: ""
	I1018 17:44:24.583809   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:44:24.583864   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:24.587649   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:24.591283   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:44:24.591388   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:44:24.617918   51251 cri.go:89] found id: ""
	I1018 17:44:24.617940   51251 logs.go:282] 0 containers: []
	W1018 17:44:24.617949   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:44:24.617959   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:44:24.618025   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:44:24.643293   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:24.643319   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:24.643323   51251 cri.go:89] found id: ""
	I1018 17:44:24.643331   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:44:24.643391   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:24.647045   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:24.650422   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:44:24.650491   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:44:24.676556   51251 cri.go:89] found id: ""
	I1018 17:44:24.676629   51251 logs.go:282] 0 containers: []
	W1018 17:44:24.676652   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:44:24.676670   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:44:24.676753   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:44:24.703335   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:24.703354   51251 cri.go:89] found id: ""
	I1018 17:44:24.703362   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:44:24.703413   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:24.707043   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:44:24.707112   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:44:24.736770   51251 cri.go:89] found id: ""
	I1018 17:44:24.736793   51251 logs.go:282] 0 containers: []
	W1018 17:44:24.736802   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:44:24.736811   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:44:24.736821   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:44:24.831690   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:44:24.831725   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:44:24.845067   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:44:24.845094   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:44:24.915666   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:44:24.907247    6020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:24.907870    6020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:24.909378    6020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:24.910211    6020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:24.911689    6020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:44:24.907247    6020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:24.907870    6020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:24.909378    6020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:24.910211    6020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:24.911689    6020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:44:24.915715   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:44:24.915728   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:24.980758   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:44:24.980794   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:25.013913   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:44:25.013944   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:44:25.095710   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:44:25.095746   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:44:25.136366   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:44:25.136395   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:25.167081   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:44:25.167108   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:25.217068   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:44:25.217106   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:25.250444   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:44:25.250477   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:27.778976   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:44:27.789442   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:44:27.789511   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:44:27.816188   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:27.816211   51251 cri.go:89] found id: ""
	I1018 17:44:27.816219   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:44:27.816273   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:27.819794   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:44:27.819867   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:44:27.846400   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:27.846433   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:27.846439   51251 cri.go:89] found id: ""
	I1018 17:44:27.846461   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:44:27.846546   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:27.850346   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:27.853879   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:44:27.853956   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:44:27.880448   51251 cri.go:89] found id: ""
	I1018 17:44:27.880471   51251 logs.go:282] 0 containers: []
	W1018 17:44:27.880480   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:44:27.880486   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:44:27.880549   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:44:27.908354   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:27.908384   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:27.908389   51251 cri.go:89] found id: ""
	I1018 17:44:27.908397   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:44:27.908454   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:27.913635   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:27.917518   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:44:27.917589   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:44:27.944652   51251 cri.go:89] found id: ""
	I1018 17:44:27.944674   51251 logs.go:282] 0 containers: []
	W1018 17:44:27.944683   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:44:27.944689   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:44:27.944749   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:44:27.978127   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:27.978150   51251 cri.go:89] found id: ""
	I1018 17:44:27.978158   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:44:27.978217   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:27.982028   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:44:27.982097   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:44:28.010364   51251 cri.go:89] found id: ""
	I1018 17:44:28.010395   51251 logs.go:282] 0 containers: []
	W1018 17:44:28.010405   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:44:28.010414   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:44:28.010426   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:44:28.113197   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:44:28.113275   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:28.143438   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:44:28.143464   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:28.193919   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:44:28.193956   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:28.233324   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:44:28.233364   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:44:28.315086   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:44:28.315121   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:44:28.327446   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:44:28.327472   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:44:28.403227   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:44:28.392160    6186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:28.393002    6186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:28.395106    6186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:28.395823    6186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:28.397363    6186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:44:28.392160    6186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:28.393002    6186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:28.395106    6186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:28.395823    6186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:28.397363    6186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:44:28.403250   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:44:28.403262   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:28.467992   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:44:28.468024   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:28.495923   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:44:28.495947   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:28.526646   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:44:28.526674   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
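	Every describe-nodes attempt between 17:44:11 and 17:44:31 fails with connect: connection refused on localhost:8443, even though a kube-apiserver container (707afb86…) is still listed by crictl. A short, hedged follow-up to separate "the container exists" from "the apiserver is actually serving"; ss and curl do not appear in the report and are assumed to be available on the node:
	
	  # Is anything listening on the apiserver port at all?
	  sudo ss -ltnp | grep ':8443' || echo "nothing listening on 8443"
	
	  # Any HTTP status code means the apiserver answered; "000" with curl exit
	  # code 7 matches the connection-refused errors shown in the log.
	  curl -sk -o /dev/null -w '%{http_code}\n' https://localhost:8443/healthz
	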
	I1018 17:44:31.058337   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:44:31.069976   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:44:31.070050   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:44:31.101306   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:31.101328   51251 cri.go:89] found id: ""
	I1018 17:44:31.101336   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:44:31.101399   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:31.105055   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:44:31.105128   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:44:31.142563   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:31.142588   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:31.142593   51251 cri.go:89] found id: ""
	I1018 17:44:31.142600   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:44:31.142662   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:31.146604   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:31.150365   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:44:31.150435   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:44:31.176760   51251 cri.go:89] found id: ""
	I1018 17:44:31.176785   51251 logs.go:282] 0 containers: []
	W1018 17:44:31.176793   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:44:31.176800   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:44:31.176894   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:44:31.209000   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:31.209022   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:31.209027   51251 cri.go:89] found id: ""
	I1018 17:44:31.209034   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:44:31.209092   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:31.213702   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:31.217030   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:44:31.217134   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:44:31.244577   51251 cri.go:89] found id: ""
	I1018 17:44:31.244600   51251 logs.go:282] 0 containers: []
	W1018 17:44:31.244608   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:44:31.244615   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:44:31.244694   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:44:31.276009   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:31.276030   51251 cri.go:89] found id: ""
	I1018 17:44:31.276037   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:44:31.276126   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:31.279948   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:44:31.280039   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:44:31.312074   51251 cri.go:89] found id: ""
	I1018 17:44:31.312098   51251 logs.go:282] 0 containers: []
	W1018 17:44:31.312108   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:44:31.312117   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:44:31.312146   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:31.374723   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:44:31.374758   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:31.402419   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:44:31.402446   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:31.430538   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:44:31.430564   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:44:31.512803   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:44:31.512837   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:44:31.614079   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:44:31.614114   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:44:31.681910   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:44:31.673049    6314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:31.673806    6314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:31.675573    6314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:31.676196    6314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:31.677982    6314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:44:31.673049    6314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:31.673806    6314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:31.675573    6314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:31.676196    6314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:31.677982    6314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:44:31.681935   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:44:31.681956   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:31.707698   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:44:31.707730   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:31.744929   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:44:31.745030   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:44:31.776082   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:44:31.776119   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:44:31.788990   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:44:31.789026   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:34.355514   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:44:34.366625   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:44:34.366689   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:44:34.394220   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:34.394241   51251 cri.go:89] found id: ""
	I1018 17:44:34.394249   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:44:34.394307   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:34.398229   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:44:34.398301   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:44:34.428966   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:34.428987   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:34.428991   51251 cri.go:89] found id: ""
	I1018 17:44:34.428999   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:44:34.429056   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:34.438000   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:34.443562   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:44:34.443638   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:44:34.470520   51251 cri.go:89] found id: ""
	I1018 17:44:34.470583   51251 logs.go:282] 0 containers: []
	W1018 17:44:34.470596   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:44:34.470603   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:44:34.470660   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:44:34.498015   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:34.498035   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:34.498040   51251 cri.go:89] found id: ""
	I1018 17:44:34.498047   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:44:34.498107   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:34.501820   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:34.505392   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:44:34.505508   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:44:34.531261   51251 cri.go:89] found id: ""
	I1018 17:44:34.531285   51251 logs.go:282] 0 containers: []
	W1018 17:44:34.531294   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:44:34.531301   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:44:34.531391   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:44:34.558417   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:34.558439   51251 cri.go:89] found id: ""
	I1018 17:44:34.558448   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:44:34.558506   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:34.562283   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:44:34.562397   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:44:34.589239   51251 cri.go:89] found id: ""
	I1018 17:44:34.589263   51251 logs.go:282] 0 containers: []
	W1018 17:44:34.589271   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:44:34.589280   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:44:34.589321   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:34.639508   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:44:34.639543   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:34.704073   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:44:34.704111   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:34.730079   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:44:34.730105   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:44:34.812757   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:44:34.812794   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:44:34.844323   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:44:34.844351   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:34.870994   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:44:34.871020   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:34.909712   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:44:34.909738   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:34.949435   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:44:34.949461   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:44:35.051363   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:44:35.051403   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:44:35.064297   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:44:35.064324   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:44:35.143040   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:44:35.134155    6490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:35.134888    6490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:35.136750    6490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:35.137513    6490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:35.139182    6490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:44:35.134155    6490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:35.134888    6490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:35.136750    6490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:35.137513    6490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:35.139182    6490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:44:37.644402   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:44:37.655473   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:44:37.655556   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:44:37.686712   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:37.686743   51251 cri.go:89] found id: ""
	I1018 17:44:37.686753   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:44:37.686818   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:37.690705   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:44:37.690780   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:44:37.717269   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:37.717288   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:37.717293   51251 cri.go:89] found id: ""
	I1018 17:44:37.717300   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:44:37.717365   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:37.721019   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:37.724434   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:44:37.724511   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:44:37.751507   51251 cri.go:89] found id: ""
	I1018 17:44:37.751529   51251 logs.go:282] 0 containers: []
	W1018 17:44:37.751548   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:44:37.751554   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:44:37.751612   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:44:37.780532   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:37.780550   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:37.780555   51251 cri.go:89] found id: ""
	I1018 17:44:37.780562   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:44:37.780620   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:37.784463   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:37.789038   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:44:37.789127   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:44:37.827207   51251 cri.go:89] found id: ""
	I1018 17:44:37.827234   51251 logs.go:282] 0 containers: []
	W1018 17:44:37.827243   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:44:37.827250   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:44:37.827328   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:44:37.854900   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:37.854962   51251 cri.go:89] found id: ""
	I1018 17:44:37.854986   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:44:37.855062   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:37.859902   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:44:37.859977   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:44:37.886300   51251 cri.go:89] found id: ""
	I1018 17:44:37.886365   51251 logs.go:282] 0 containers: []
	W1018 17:44:37.886388   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:44:37.886409   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:44:37.886446   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:44:37.984179   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:44:37.984212   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:44:38.054964   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:44:38.045702    6560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:38.046390    6560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:38.048099    6560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:38.048652    6560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:38.050343    6560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:44:38.045702    6560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:38.046390    6560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:38.048099    6560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:38.048652    6560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:38.050343    6560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:44:38.054994   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:44:38.055010   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:38.084660   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:44:38.084691   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:38.124518   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:44:38.124606   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:38.190852   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:44:38.190893   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:44:38.273991   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:44:38.274027   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:44:38.286517   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:44:38.286546   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:38.338543   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:44:38.338580   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:38.367716   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:44:38.367745   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:38.401155   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:44:38.401184   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:44:40.943389   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:44:40.954255   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:44:40.954330   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:44:40.990505   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:40.990526   51251 cri.go:89] found id: ""
	I1018 17:44:40.990535   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:44:40.990591   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:40.994301   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:44:40.994374   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:44:41.024101   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:41.024123   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:41.024128   51251 cri.go:89] found id: ""
	I1018 17:44:41.024135   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:44:41.024202   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:41.028135   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:41.031764   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:44:41.031846   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:44:41.058027   51251 cri.go:89] found id: ""
	I1018 17:44:41.058110   51251 logs.go:282] 0 containers: []
	W1018 17:44:41.058133   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:44:41.058154   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:44:41.058241   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:44:41.084363   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:41.084429   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:41.084447   51251 cri.go:89] found id: ""
	I1018 17:44:41.084468   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:44:41.084549   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:41.088275   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:41.091806   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:44:41.091872   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:44:41.119266   51251 cri.go:89] found id: ""
	I1018 17:44:41.119288   51251 logs.go:282] 0 containers: []
	W1018 17:44:41.119296   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:44:41.119302   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:44:41.119364   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:44:41.152142   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:41.152162   51251 cri.go:89] found id: ""
	I1018 17:44:41.152171   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:44:41.152233   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:41.155967   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:44:41.156039   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:44:41.183430   51251 cri.go:89] found id: ""
	I1018 17:44:41.183453   51251 logs.go:282] 0 containers: []
	W1018 17:44:41.183461   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:44:41.183470   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:44:41.183481   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:41.217575   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:44:41.217599   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:44:41.314633   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:44:41.314667   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:44:41.383386   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:44:41.373451    6704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:41.374006    6704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:41.375984    6704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:41.377691    6704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:41.379407    6704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:44:41.373451    6704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:41.374006    6704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:41.375984    6704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:41.377691    6704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:41.379407    6704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:44:41.383406   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:44:41.383419   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:41.446018   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:44:41.446089   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:41.488303   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:44:41.488335   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:41.520983   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:44:41.521012   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:44:41.604693   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:44:41.604726   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:44:41.638240   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:44:41.638266   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:44:41.649462   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:44:41.649486   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:41.674875   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:44:41.674902   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:44.238248   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:44:44.255175   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:44:44.255240   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:44:44.287509   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:44.287527   51251 cri.go:89] found id: ""
	I1018 17:44:44.287535   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:44:44.287592   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:44.292053   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:44:44.292125   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:44:44.323105   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:44.323123   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:44.323128   51251 cri.go:89] found id: ""
	I1018 17:44:44.323135   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:44:44.323191   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:44.327287   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:44.331002   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:44:44.331110   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:44:44.362329   51251 cri.go:89] found id: ""
	I1018 17:44:44.362393   51251 logs.go:282] 0 containers: []
	W1018 17:44:44.362415   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:44:44.362436   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:44:44.362517   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:44:44.393314   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:44.393384   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:44.393403   51251 cri.go:89] found id: ""
	I1018 17:44:44.393432   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:44:44.393510   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:44.397610   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:44.401568   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:44:44.401674   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:44:44.439288   51251 cri.go:89] found id: ""
	I1018 17:44:44.439350   51251 logs.go:282] 0 containers: []
	W1018 17:44:44.439370   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:44:44.439391   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:44:44.439473   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:44:44.477857   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:44:44.477920   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:44.477939   51251 cri.go:89] found id: ""
	I1018 17:44:44.477960   51251 logs.go:282] 2 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:44:44.478038   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:44.482903   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:44.487434   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:44:44.487551   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:44:44.527686   51251 cri.go:89] found id: ""
	I1018 17:44:44.527761   51251 logs.go:282] 0 containers: []
	W1018 17:44:44.527784   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:44:44.527823   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:44:44.527850   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:44:44.637841   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:44:44.637917   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:44:44.653818   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:44:44.653846   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:44:44.762008   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:44:44.751907    6845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:44.753161    6845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:44.755038    6845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:44.755967    6845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:44.757158    6845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:44:44.751907    6845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:44.753161    6845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:44.755038    6845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:44.755967    6845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:44.757158    6845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:44:44.762038   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:44:44.762067   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:44.798868   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:44:44.798900   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:44.850591   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:44:44.850634   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:44.938420   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:44:44.938472   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:44:44.980294   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:44:44.980372   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:44:45.089048   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:44:45.089096   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:45.196420   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:44:45.196522   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:45.246623   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:44:45.246803   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:45.295911   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:44:45.295955   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:44:47.851142   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:44:47.862455   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:44:47.862520   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:44:47.888902   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:47.888970   51251 cri.go:89] found id: ""
	I1018 17:44:47.888984   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:44:47.889042   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:47.893115   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:44:47.893208   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:44:47.923068   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:47.923087   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:47.923091   51251 cri.go:89] found id: ""
	I1018 17:44:47.923099   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:44:47.923170   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:47.927351   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:47.931468   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:44:47.931541   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:44:47.958620   51251 cri.go:89] found id: ""
	I1018 17:44:47.958642   51251 logs.go:282] 0 containers: []
	W1018 17:44:47.958651   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:44:47.958657   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:44:47.958717   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:44:47.988421   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:47.988494   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:47.988514   51251 cri.go:89] found id: ""
	I1018 17:44:47.988534   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:44:47.988616   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:47.992743   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:47.996667   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:44:47.996742   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:44:48.025533   51251 cri.go:89] found id: ""
	I1018 17:44:48.025560   51251 logs.go:282] 0 containers: []
	W1018 17:44:48.025568   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:44:48.025575   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:44:48.025654   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:44:48.053974   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:44:48.053997   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:48.054002   51251 cri.go:89] found id: ""
	I1018 17:44:48.054009   51251 logs.go:282] 2 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:44:48.054070   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:48.057945   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:48.061877   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:44:48.061953   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:44:48.090761   51251 cri.go:89] found id: ""
	I1018 17:44:48.090786   51251 logs.go:282] 0 containers: []
	W1018 17:44:48.090795   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:44:48.090805   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:44:48.090817   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:44:48.189723   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:44:48.189756   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:48.221709   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:44:48.221739   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:48.259440   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:44:48.259470   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:48.345516   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:44:48.345553   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:48.374446   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:44:48.374477   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:44:48.460806   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:44:48.460842   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:44:48.473713   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:44:48.473739   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:44:48.554183   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:44:48.545515    7023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:48.546813    7023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:48.547313    7023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:48.548898    7023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:48.549566    7023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:44:48.545515    7023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:48.546813    7023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:48.547313    7023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:48.548898    7023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:48.549566    7023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:44:48.554204   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:44:48.554217   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:48.609158   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:44:48.609190   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:44:48.636984   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:44:48.637062   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:48.664743   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:44:48.664822   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:44:51.198411   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:44:51.210016   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:44:51.210081   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:44:51.236981   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:51.237004   51251 cri.go:89] found id: ""
	I1018 17:44:51.237012   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:44:51.237077   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:51.240676   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:44:51.240750   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:44:51.269356   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:51.269382   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:51.269387   51251 cri.go:89] found id: ""
	I1018 17:44:51.269395   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:44:51.269453   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:51.273122   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:51.277060   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:44:51.277132   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:44:51.304766   51251 cri.go:89] found id: ""
	I1018 17:44:51.304790   51251 logs.go:282] 0 containers: []
	W1018 17:44:51.304799   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:44:51.304805   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:44:51.304865   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:44:51.332379   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:51.332401   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:51.332406   51251 cri.go:89] found id: ""
	I1018 17:44:51.332414   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:44:51.332474   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:51.336518   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:51.341898   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:44:51.341976   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:44:51.367678   51251 cri.go:89] found id: ""
	I1018 17:44:51.367708   51251 logs.go:282] 0 containers: []
	W1018 17:44:51.367726   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:44:51.367732   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:44:51.367796   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:44:51.394153   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:44:51.394175   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:51.394180   51251 cri.go:89] found id: ""
	I1018 17:44:51.394187   51251 logs.go:282] 2 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:44:51.394243   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:51.397993   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:51.401471   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:44:51.401578   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:44:51.428758   51251 cri.go:89] found id: ""
	I1018 17:44:51.428822   51251 logs.go:282] 0 containers: []
	W1018 17:44:51.428844   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:44:51.428870   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:44:51.428894   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:44:51.503688   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:44:51.495917    7130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:51.496423    7130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:51.498141    7130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:51.498547    7130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:51.500003    7130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:44:51.495917    7130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:51.496423    7130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:51.498141    7130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:51.498547    7130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:51.500003    7130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:44:51.503709   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:44:51.503722   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:51.532853   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:44:51.532878   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:51.596823   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:44:51.596858   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:44:51.623499   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:44:51.623527   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:51.653511   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:44:51.653538   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:44:51.743235   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:44:51.743280   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:44:51.775603   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:44:51.775632   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:44:51.875854   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:44:51.875890   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:44:51.893446   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:44:51.893471   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:51.928284   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:44:51.928316   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:51.997158   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:44:51.997193   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:54.531254   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:44:54.544073   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:44:54.544143   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:44:54.572505   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:54.572526   51251 cri.go:89] found id: ""
	I1018 17:44:54.572534   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:44:54.572589   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:54.576276   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:44:54.576349   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:44:54.608530   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:54.608552   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:54.608557   51251 cri.go:89] found id: ""
	I1018 17:44:54.608564   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:44:54.608620   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:54.612802   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:54.616507   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:44:54.616574   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:44:54.646887   51251 cri.go:89] found id: ""
	I1018 17:44:54.646909   51251 logs.go:282] 0 containers: []
	W1018 17:44:54.646918   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:44:54.646924   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:44:54.646985   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:44:54.673624   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:54.673641   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:54.673646   51251 cri.go:89] found id: ""
	I1018 17:44:54.673653   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:44:54.673708   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:54.677580   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:54.680915   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:44:54.681039   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:44:54.707856   51251 cri.go:89] found id: ""
	I1018 17:44:54.707882   51251 logs.go:282] 0 containers: []
	W1018 17:44:54.707890   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:44:54.707897   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:44:54.707985   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:44:54.739572   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:44:54.739596   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:54.739602   51251 cri.go:89] found id: ""
	I1018 17:44:54.739609   51251 logs.go:282] 2 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:44:54.739666   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:54.744278   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:54.747740   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:44:54.747812   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:44:54.786379   51251 cri.go:89] found id: ""
	I1018 17:44:54.786405   51251 logs.go:282] 0 containers: []
	W1018 17:44:54.786413   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:44:54.786423   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:44:54.786435   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:54.850541   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:44:54.850577   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:54.878112   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:44:54.878139   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:54.905434   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:44:54.905462   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:44:54.983610   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:44:54.974914    7302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:54.975800    7302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:54.977585    7302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:54.978207    7302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:54.979920    7302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:44:54.974914    7302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:54.975800    7302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:54.977585    7302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:54.978207    7302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:54.979920    7302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:44:54.983631   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:44:54.983643   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:44:55.018119   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:44:55.018148   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:44:55.096411   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:44:55.096446   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:44:55.134900   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:44:55.134926   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:44:55.237181   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:44:55.237214   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:44:55.250828   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:44:55.250858   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:55.281899   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:44:55.281928   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:55.339174   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:44:55.339208   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:57.880428   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:44:57.891159   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:44:57.891231   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:44:57.921966   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:57.921988   51251 cri.go:89] found id: ""
	I1018 17:44:57.921996   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:44:57.922051   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:57.925877   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:44:57.925946   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:44:57.983701   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:57.983719   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:57.983724   51251 cri.go:89] found id: ""
	I1018 17:44:57.983731   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:44:57.983785   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:57.988147   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:57.991948   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:44:57.992055   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:44:58.027455   51251 cri.go:89] found id: ""
	I1018 17:44:58.027489   51251 logs.go:282] 0 containers: []
	W1018 17:44:58.027498   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:44:58.027504   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:44:58.027572   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:44:58.061874   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:58.061896   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:58.061902   51251 cri.go:89] found id: ""
	I1018 17:44:58.061911   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:44:58.061971   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:58.065752   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:58.069525   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:44:58.069600   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:44:58.099676   51251 cri.go:89] found id: ""
	I1018 17:44:58.099698   51251 logs.go:282] 0 containers: []
	W1018 17:44:58.099707   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:44:58.099720   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:44:58.099778   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:44:58.132718   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:44:58.132740   51251 cri.go:89] found id: ""
	I1018 17:44:58.132748   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:44:58.132803   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:58.136641   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:44:58.136718   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:44:58.161767   51251 cri.go:89] found id: ""
	I1018 17:44:58.161791   51251 logs.go:282] 0 containers: []
	W1018 17:44:58.161799   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:44:58.161808   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:44:58.161820   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:44:58.239848   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:44:58.231755    7434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:58.232488    7434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:58.234323    7434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:58.234970    7434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:58.236249    7434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:44:58.231755    7434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:58.232488    7434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:58.234323    7434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:58.234970    7434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:58.236249    7434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:44:58.239867   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:44:58.239879   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:58.265229   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:44:58.265253   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:58.316459   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:44:58.316495   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:58.382736   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:44:58.382771   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:44:58.461400   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:44:58.461435   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:44:58.496880   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:44:58.496905   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:44:58.600326   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:44:58.600360   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:44:58.612833   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:44:58.612860   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:58.652792   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:44:58.652823   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:58.683598   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:44:58.683624   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:01.209276   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:45:01.221741   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:45:01.221825   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:45:01.255998   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:01.256020   51251 cri.go:89] found id: ""
	I1018 17:45:01.256029   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:45:01.256090   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:01.260323   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:45:01.260410   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:45:01.290623   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:01.290646   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:01.290652   51251 cri.go:89] found id: ""
	I1018 17:45:01.290660   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:45:01.290722   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:01.294923   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:01.299340   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:45:01.299421   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:45:01.328205   51251 cri.go:89] found id: ""
	I1018 17:45:01.328234   51251 logs.go:282] 0 containers: []
	W1018 17:45:01.328244   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:45:01.328251   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:45:01.328321   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:45:01.360099   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:01.360123   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:01.360128   51251 cri.go:89] found id: ""
	I1018 17:45:01.360136   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:45:01.360209   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:01.364283   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:01.368572   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:45:01.368657   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:45:01.397092   51251 cri.go:89] found id: ""
	I1018 17:45:01.397161   51251 logs.go:282] 0 containers: []
	W1018 17:45:01.397184   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:45:01.397207   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:45:01.397297   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:45:01.426452   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:01.426520   51251 cri.go:89] found id: ""
	I1018 17:45:01.426537   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:45:01.426623   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:01.430959   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:45:01.431090   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:45:01.460044   51251 cri.go:89] found id: ""
	I1018 17:45:01.460085   51251 logs.go:282] 0 containers: []
	W1018 17:45:01.460095   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:45:01.460126   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:45:01.460171   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:45:01.536047   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:45:01.536083   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:45:01.548838   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:45:01.548870   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:01.581436   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:45:01.581464   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:01.639347   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:45:01.639384   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:01.667540   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:45:01.667571   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:45:01.714304   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:45:01.714330   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:45:01.813430   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:45:01.813510   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:45:01.882898   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:45:01.873459    7615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:01.874354    7615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:01.876306    7615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:01.877166    7615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:01.878779    7615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:45:01.873459    7615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:01.874354    7615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:01.876306    7615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:01.877166    7615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:01.878779    7615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:45:01.882921   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:45:01.882937   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:01.917303   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:45:01.917407   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:01.999403   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:45:01.999445   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:04.533522   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:45:04.544111   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:45:04.544187   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:45:04.570770   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:04.570840   51251 cri.go:89] found id: ""
	I1018 17:45:04.570855   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:45:04.570912   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:04.575103   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:45:04.575198   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:45:04.609501   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:04.609532   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:04.609537   51251 cri.go:89] found id: ""
	I1018 17:45:04.609545   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:45:04.609600   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:04.613955   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:04.617439   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:45:04.617516   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:45:04.645280   51251 cri.go:89] found id: ""
	I1018 17:45:04.645306   51251 logs.go:282] 0 containers: []
	W1018 17:45:04.645315   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:45:04.645324   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:45:04.645392   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:45:04.672130   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:04.672153   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:04.672158   51251 cri.go:89] found id: ""
	I1018 17:45:04.672167   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:45:04.672223   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:04.676297   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:04.681021   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:45:04.681099   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:45:04.707420   51251 cri.go:89] found id: ""
	I1018 17:45:04.707444   51251 logs.go:282] 0 containers: []
	W1018 17:45:04.707452   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:45:04.707461   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:45:04.707517   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:45:04.737533   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:04.737555   51251 cri.go:89] found id: ""
	I1018 17:45:04.737565   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:45:04.737631   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:04.741271   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:45:04.741342   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:45:04.767657   51251 cri.go:89] found id: ""
	I1018 17:45:04.767681   51251 logs.go:282] 0 containers: []
	W1018 17:45:04.767689   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:45:04.767699   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:45:04.767710   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:45:04.863553   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:45:04.863587   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:45:04.875569   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:45:04.875600   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:04.930436   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:45:04.930476   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:04.969240   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:45:04.969276   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:05.039302   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:45:05.039336   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:05.067077   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:45:05.067103   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:45:05.148387   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:45:05.148422   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:45:05.223337   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:45:05.215470    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:05.216065    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:05.217641    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:05.218213    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:05.219737    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:45:05.215470    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:05.216065    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:05.217641    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:05.218213    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:05.219737    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:45:05.223369   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:45:05.223382   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:05.249066   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:45:05.249091   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:05.280440   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:45:05.280465   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:45:07.817192   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:45:07.827427   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:45:07.827497   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:45:07.853178   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:07.853198   51251 cri.go:89] found id: ""
	I1018 17:45:07.853206   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:45:07.853261   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:07.857004   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:45:07.857072   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:45:07.882619   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:07.882640   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:07.882645   51251 cri.go:89] found id: ""
	I1018 17:45:07.882652   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:45:07.882716   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:07.886518   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:07.890146   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:45:07.890220   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:45:07.917313   51251 cri.go:89] found id: ""
	I1018 17:45:07.917338   51251 logs.go:282] 0 containers: []
	W1018 17:45:07.917351   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:45:07.917358   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:45:07.917421   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:45:07.950191   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:07.950218   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:07.950223   51251 cri.go:89] found id: ""
	I1018 17:45:07.950234   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:45:07.950304   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:07.953933   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:07.957694   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:45:07.957770   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:45:07.990144   51251 cri.go:89] found id: ""
	I1018 17:45:07.990167   51251 logs.go:282] 0 containers: []
	W1018 17:45:07.990176   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:45:07.990183   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:45:07.990240   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:45:08.023638   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:08.023660   51251 cri.go:89] found id: ""
	I1018 17:45:08.023669   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:45:08.023729   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:08.028231   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:45:08.028307   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:45:08.056653   51251 cri.go:89] found id: ""
	I1018 17:45:08.056678   51251 logs.go:282] 0 containers: []
	W1018 17:45:08.056687   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:45:08.056696   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:45:08.056708   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:45:08.132641   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:45:08.122188    7845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:08.122913    7845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:08.124506    7845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:08.124806    7845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:08.126307    7845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:45:08.122188    7845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:08.122913    7845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:08.124506    7845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:08.124806    7845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:08.126307    7845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:45:08.132662   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:45:08.132677   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:08.197105   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:45:08.197143   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:08.238131   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:45:08.238157   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:08.266672   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:45:08.266701   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:45:08.302562   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:45:08.302587   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:45:08.411059   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:45:08.411103   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:45:08.423232   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:45:08.423261   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:08.449524   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:45:08.449549   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:08.505779   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:45:08.505811   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:08.540674   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:45:08.540708   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:45:11.118218   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:45:11.130399   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:45:11.130521   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:45:11.164618   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:11.164637   51251 cri.go:89] found id: ""
	I1018 17:45:11.164644   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:45:11.164700   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:11.168380   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:45:11.168453   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:45:11.195034   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:11.195059   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:11.195065   51251 cri.go:89] found id: ""
	I1018 17:45:11.195072   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:45:11.195126   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:11.199134   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:11.203492   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:45:11.203557   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:45:11.230659   51251 cri.go:89] found id: ""
	I1018 17:45:11.230681   51251 logs.go:282] 0 containers: []
	W1018 17:45:11.230689   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:45:11.230697   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:45:11.230773   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:45:11.256814   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:11.256842   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:11.256847   51251 cri.go:89] found id: ""
	I1018 17:45:11.256855   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:45:11.256973   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:11.260554   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:11.263940   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:45:11.264009   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:45:11.289036   51251 cri.go:89] found id: ""
	I1018 17:45:11.289114   51251 logs.go:282] 0 containers: []
	W1018 17:45:11.289128   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:45:11.289134   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:45:11.289192   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:45:11.320844   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:11.320867   51251 cri.go:89] found id: ""
	I1018 17:45:11.320875   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:45:11.320928   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:11.324471   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:45:11.324537   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:45:11.350002   51251 cri.go:89] found id: ""
	I1018 17:45:11.350028   51251 logs.go:282] 0 containers: []
	W1018 17:45:11.350036   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:45:11.350045   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:45:11.350057   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:45:11.415699   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:45:11.407276    7984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:11.408085    7984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:11.409925    7984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:11.410627    7984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:11.412208    7984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:45:11.407276    7984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:11.408085    7984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:11.409925    7984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:11.410627    7984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:11.412208    7984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:45:11.415719   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:45:11.415732   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:11.467144   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:45:11.467178   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:11.500116   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:45:11.500149   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:11.565053   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:45:11.565083   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:45:11.594806   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:45:11.594833   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:11.621385   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:45:11.621416   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:11.649391   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:45:11.649418   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:11.681270   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:45:11.681294   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:45:11.758017   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:45:11.758049   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:45:11.856363   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:45:11.856394   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:45:14.369690   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:45:14.380482   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:45:14.380582   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:45:14.406908   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:14.406929   51251 cri.go:89] found id: ""
	I1018 17:45:14.406937   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:45:14.406991   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:14.410922   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:45:14.410995   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:45:14.438715   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:14.438787   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:14.438805   51251 cri.go:89] found id: ""
	I1018 17:45:14.438825   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:45:14.438910   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:14.442634   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:14.446455   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:45:14.446583   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:45:14.472662   51251 cri.go:89] found id: ""
	I1018 17:45:14.472729   51251 logs.go:282] 0 containers: []
	W1018 17:45:14.472740   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:45:14.472749   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:45:14.472837   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:45:14.499722   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:14.499787   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:14.499804   51251 cri.go:89] found id: ""
	I1018 17:45:14.499826   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:45:14.499910   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:14.503638   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:14.507247   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:45:14.507364   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:45:14.534947   51251 cri.go:89] found id: ""
	I1018 17:45:14.534973   51251 logs.go:282] 0 containers: []
	W1018 17:45:14.534981   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:45:14.534987   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:45:14.535064   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:45:14.561664   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:14.561686   51251 cri.go:89] found id: ""
	I1018 17:45:14.561695   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:45:14.561753   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:14.565710   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:45:14.565806   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:45:14.595947   51251 cri.go:89] found id: ""
	I1018 17:45:14.595972   51251 logs.go:282] 0 containers: []
	W1018 17:45:14.595980   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:45:14.595990   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:45:14.596029   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:45:14.671772   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:45:14.671807   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:45:14.775531   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:45:14.775566   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:45:14.787782   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:45:14.787811   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:14.819786   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:45:14.819816   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:45:14.851924   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:45:14.851951   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:45:14.920046   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:45:14.911958    8150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:14.912762    8150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:14.914424    8150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:14.914744    8150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:14.916204    8150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:45:14.911958    8150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:14.912762    8150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:14.914424    8150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:14.914744    8150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:14.916204    8150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:45:14.920119   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:45:14.920139   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:14.977739   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:45:14.977775   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:15.032058   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:45:15.032091   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:15.102494   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:45:15.102529   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:15.138731   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:45:15.138757   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:17.666030   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:45:17.676690   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:45:17.676760   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:45:17.703559   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:17.703578   51251 cri.go:89] found id: ""
	I1018 17:45:17.703585   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:45:17.703638   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:17.707859   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:45:17.707930   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:45:17.735399   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:17.735422   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:17.735433   51251 cri.go:89] found id: ""
	I1018 17:45:17.735441   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:45:17.735498   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:17.739407   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:17.742711   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:45:17.742782   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:45:17.773860   51251 cri.go:89] found id: ""
	I1018 17:45:17.773930   51251 logs.go:282] 0 containers: []
	W1018 17:45:17.773946   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:45:17.773953   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:45:17.774014   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:45:17.800989   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:17.801015   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:17.801021   51251 cri.go:89] found id: ""
	I1018 17:45:17.801028   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:45:17.801094   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:17.805064   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:17.808714   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:45:17.808845   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:45:17.835041   51251 cri.go:89] found id: ""
	I1018 17:45:17.835065   51251 logs.go:282] 0 containers: []
	W1018 17:45:17.835073   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:45:17.835080   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:45:17.835141   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:45:17.866314   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:17.866337   51251 cri.go:89] found id: ""
	I1018 17:45:17.866345   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:45:17.866406   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:17.870038   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:45:17.870110   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:45:17.895894   51251 cri.go:89] found id: ""
	I1018 17:45:17.895916   51251 logs.go:282] 0 containers: []
	W1018 17:45:17.895925   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:45:17.895934   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:45:17.895945   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:45:17.998692   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:45:17.998766   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:45:18.015153   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:45:18.015182   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:18.068223   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:45:18.068259   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:45:18.154314   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:45:18.154356   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:45:18.243477   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:45:18.234737    8277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:18.235447    8277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:18.237270    8277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:18.237840    8277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:18.239403    8277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:45:18.234737    8277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:18.235447    8277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:18.237270    8277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:18.237840    8277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:18.239403    8277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:45:18.243497   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:45:18.243509   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:18.275940   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:45:18.275970   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:18.316930   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:45:18.316995   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:18.389081   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:45:18.389116   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:18.418930   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:45:18.418956   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:18.449161   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:45:18.449188   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:45:20.980259   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:45:20.991356   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:45:20.991427   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:45:21.028373   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:21.028396   51251 cri.go:89] found id: ""
	I1018 17:45:21.028404   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:45:21.028462   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:21.031989   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:45:21.032060   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:45:21.061105   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:21.061126   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:21.061138   51251 cri.go:89] found id: ""
	I1018 17:45:21.061147   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:45:21.061206   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:21.064983   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:21.068555   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:45:21.068622   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:45:21.095318   51251 cri.go:89] found id: ""
	I1018 17:45:21.095340   51251 logs.go:282] 0 containers: []
	W1018 17:45:21.095348   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:45:21.095354   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:45:21.095410   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:45:21.132132   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:21.132167   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:21.132172   51251 cri.go:89] found id: ""
	I1018 17:45:21.132195   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:45:21.132278   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:21.136778   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:21.140214   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:45:21.140288   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:45:21.172583   51251 cri.go:89] found id: ""
	I1018 17:45:21.172605   51251 logs.go:282] 0 containers: []
	W1018 17:45:21.172614   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:45:21.172620   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:45:21.172675   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:45:21.203092   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:21.203113   51251 cri.go:89] found id: ""
	I1018 17:45:21.203121   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:45:21.203176   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:21.207592   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:45:21.207657   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:45:21.235546   51251 cri.go:89] found id: ""
	I1018 17:45:21.235570   51251 logs.go:282] 0 containers: []
	W1018 17:45:21.235580   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:45:21.235589   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:45:21.235635   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:45:21.332614   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:45:21.332652   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:21.360929   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:45:21.361068   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:21.401211   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:45:21.401249   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:21.468558   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:45:21.468594   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:21.498171   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:45:21.498196   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:45:21.576112   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:45:21.576147   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:45:21.607742   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:45:21.607775   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:45:21.619918   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:45:21.619943   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:45:21.687350   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:45:21.679038    8451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:21.679743    8451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:21.681303    8451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:21.681885    8451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:21.683555    8451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:45:21.679038    8451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:21.679743    8451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:21.681303    8451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:21.681885    8451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:21.683555    8451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:45:21.687371   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:45:21.687384   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:21.742021   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:45:21.742057   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:24.270296   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:45:24.281336   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:45:24.281412   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:45:24.310155   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:24.310176   51251 cri.go:89] found id: ""
	I1018 17:45:24.310184   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:45:24.310236   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:24.314848   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:45:24.314949   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:45:24.343101   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:24.343140   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:24.343146   51251 cri.go:89] found id: ""
	I1018 17:45:24.343154   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:45:24.343214   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:24.347137   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:24.350301   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:45:24.350364   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:45:24.375739   51251 cri.go:89] found id: ""
	I1018 17:45:24.375763   51251 logs.go:282] 0 containers: []
	W1018 17:45:24.375774   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:45:24.375787   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:45:24.375845   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:45:24.414912   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:24.414933   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:24.414944   51251 cri.go:89] found id: ""
	I1018 17:45:24.414952   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:45:24.415006   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:24.419585   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:24.423104   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:45:24.423211   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:45:24.449615   51251 cri.go:89] found id: ""
	I1018 17:45:24.449639   51251 logs.go:282] 0 containers: []
	W1018 17:45:24.449647   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:45:24.449653   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:45:24.449709   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:45:24.476036   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:24.476057   51251 cri.go:89] found id: ""
	I1018 17:45:24.476065   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:45:24.476126   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:24.479757   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:45:24.479825   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:45:24.512386   51251 cri.go:89] found id: ""
	I1018 17:45:24.512409   51251 logs.go:282] 0 containers: []
	W1018 17:45:24.512417   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:45:24.512426   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:45:24.512438   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:24.538617   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:45:24.538645   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:24.592949   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:45:24.592984   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:24.621215   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:45:24.621242   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:45:24.697575   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:45:24.697611   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:45:24.769130   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:45:24.760873    8565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:24.761713    8565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:24.763257    8565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:24.763723    8565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:24.765324    8565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:45:24.760873    8565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:24.761713    8565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:24.763257    8565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:24.763723    8565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:24.765324    8565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:45:24.769206   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:45:24.769228   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:24.807477   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:45:24.807508   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:24.880464   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:45:24.880506   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:24.913114   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:45:24.913140   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:45:24.946306   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:45:24.946335   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:45:25.051970   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:45:25.052004   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:45:27.565286   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:45:27.576658   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:45:27.576726   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:45:27.613181   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:27.613202   51251 cri.go:89] found id: ""
	I1018 17:45:27.613210   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:45:27.613264   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:27.617394   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:45:27.617462   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:45:27.645391   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:27.645413   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:27.645418   51251 cri.go:89] found id: ""
	I1018 17:45:27.645426   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:45:27.645494   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:27.649249   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:27.652792   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:45:27.652866   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:45:27.679303   51251 cri.go:89] found id: ""
	I1018 17:45:27.679368   51251 logs.go:282] 0 containers: []
	W1018 17:45:27.679390   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:45:27.679408   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:45:27.679492   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:45:27.705387   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:27.705453   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:27.705466   51251 cri.go:89] found id: ""
	I1018 17:45:27.705475   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:45:27.705532   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:27.709305   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:27.713679   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:45:27.713761   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:45:27.740178   51251 cri.go:89] found id: ""
	I1018 17:45:27.740203   51251 logs.go:282] 0 containers: []
	W1018 17:45:27.740211   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:45:27.740218   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:45:27.740277   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:45:27.768320   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:27.768342   51251 cri.go:89] found id: ""
	I1018 17:45:27.768351   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:45:27.768416   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:27.772360   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:45:27.772471   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:45:27.797997   51251 cri.go:89] found id: ""
	I1018 17:45:27.798018   51251 logs.go:282] 0 containers: []
	W1018 17:45:27.798026   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:45:27.798049   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:45:27.798061   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:27.824302   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:45:27.824379   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:27.859099   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:45:27.859131   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:27.889803   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:45:27.889830   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:45:27.902196   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:45:27.902221   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:27.958924   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:45:27.958960   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:28.038453   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:45:28.038489   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:28.067717   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:45:28.067748   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:45:28.156959   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:45:28.156998   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:45:28.189533   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:45:28.189561   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:45:28.296814   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:45:28.296848   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:45:28.370306   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:45:28.360661    8742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:28.362171    8742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:28.362714    8742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:28.364316    8742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:28.364866    8742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:45:28.360661    8742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:28.362171    8742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:28.362714    8742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:28.364316    8742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:28.364866    8742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
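	The retry cycles that follow repeat these same probes every few seconds until the API server answers. A minimal hand-run equivalent, from a shell inside the node (e.g. via `minikube ssh`), is sketched below; the port 8443, the kubectl binary path and the kubeconfig path are taken from the log lines above, while the presence of curl in the node image is an assumption:

	    # Is the kube-apiserver container present at all (running or exited)?
	    sudo crictl ps -a --name=kube-apiserver
	    # Probe the endpoint that kubectl is being refused on (assumes curl is available).
	    sudo curl -sk https://localhost:8443/healthz; echo
	    # Lighter-weight variant of the failing "describe nodes" call.
	    sudo /var/lib/minikube/binaries/v1.34.1/kubectl get nodes --kubeconfig=/var/lib/minikube/kubeconfig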
	I1018 17:45:30.870515   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:45:30.881788   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:45:30.881863   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:45:30.910070   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:30.910091   51251 cri.go:89] found id: ""
	I1018 17:45:30.910099   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:45:30.910154   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:30.914699   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:45:30.914767   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:45:30.944925   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:30.944970   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:30.944975   51251 cri.go:89] found id: ""
	I1018 17:45:30.944982   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:45:30.945037   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:30.948747   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:30.954312   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:45:30.954375   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:45:30.992317   51251 cri.go:89] found id: ""
	I1018 17:45:30.992339   51251 logs.go:282] 0 containers: []
	W1018 17:45:30.992347   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:45:30.992353   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:45:30.992409   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:45:31.020830   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:31.020849   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:31.020853   51251 cri.go:89] found id: ""
	I1018 17:45:31.020860   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:45:31.020918   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:31.025302   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:31.028979   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:45:31.029048   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:45:31.066137   51251 cri.go:89] found id: ""
	I1018 17:45:31.066238   51251 logs.go:282] 0 containers: []
	W1018 17:45:31.066262   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:45:31.066295   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:45:31.066401   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:45:31.093628   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:31.093651   51251 cri.go:89] found id: ""
	I1018 17:45:31.093659   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:45:31.093747   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:31.097751   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:45:31.097830   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:45:31.126496   51251 cri.go:89] found id: ""
	I1018 17:45:31.126517   51251 logs.go:282] 0 containers: []
	W1018 17:45:31.126526   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:45:31.126535   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:45:31.126547   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:45:31.199157   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:45:31.190529    8811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:31.191738    8811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:31.193086    8811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:31.193754    8811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:31.195583    8811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:45:31.190529    8811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:31.191738    8811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:31.193086    8811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:31.193754    8811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:31.195583    8811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:45:31.199180   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:45:31.199192   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:31.227645   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:45:31.227672   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:31.299176   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:45:31.299211   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:31.331846   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:45:31.331870   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:45:31.408603   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:45:31.408637   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:45:31.443678   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:45:31.443708   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:45:31.543336   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:45:31.543370   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:31.584237   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:45:31.584267   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:31.657778   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:45:31.657815   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:31.687304   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:45:31.687331   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
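	Each cycle re-issues the same per-component sweep (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet) before tailing the logs of whatever it finds. A rough shell equivalent of that sweep, assuming crictl is on the PATH inside the node, would be:

	    # List all containers for each control-plane component and tail their logs,
	    # mirroring the crictl calls recorded in the cycles above.
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	      for id in $(sudo crictl ps -a --quiet --name="${name}"); do
	        echo "=== ${name} ${id} ==="
	        sudo crictl logs --tail 400 "${id}"
	      done
	    done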
	I1018 17:45:34.200278   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:45:34.213848   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:45:34.213915   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:45:34.240838   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:34.240860   51251 cri.go:89] found id: ""
	I1018 17:45:34.240874   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:45:34.240930   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:34.244825   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:45:34.244901   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:45:34.271020   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:34.271040   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:34.271044   51251 cri.go:89] found id: ""
	I1018 17:45:34.271052   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:45:34.271106   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:34.274974   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:34.278648   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:45:34.278748   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:45:34.306959   51251 cri.go:89] found id: ""
	I1018 17:45:34.306980   51251 logs.go:282] 0 containers: []
	W1018 17:45:34.306988   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:45:34.307023   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:45:34.307092   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:45:34.332551   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:34.332573   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:34.332578   51251 cri.go:89] found id: ""
	I1018 17:45:34.332585   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:45:34.332641   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:34.336514   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:34.340414   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:45:34.340491   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:45:34.366530   51251 cri.go:89] found id: ""
	I1018 17:45:34.366556   51251 logs.go:282] 0 containers: []
	W1018 17:45:34.366566   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:45:34.366572   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:45:34.366633   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:45:34.393555   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:34.393573   51251 cri.go:89] found id: ""
	I1018 17:45:34.393581   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:45:34.393637   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:34.397566   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:45:34.397635   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:45:34.424542   51251 cri.go:89] found id: ""
	I1018 17:45:34.424566   51251 logs.go:282] 0 containers: []
	W1018 17:45:34.424575   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:45:34.424584   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:45:34.424595   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:45:34.436112   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:45:34.436137   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:45:34.507631   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:45:34.499819    8951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:34.500689    8951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:34.501741    8951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:34.502269    8951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:34.503964    8951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:45:34.499819    8951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:34.500689    8951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:34.501741    8951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:34.502269    8951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:34.503964    8951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:45:34.507654   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:45:34.507666   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:34.562029   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:45:34.562062   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:34.599739   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:45:34.599770   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:34.628468   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:45:34.628493   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:45:34.702022   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:45:34.702053   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:45:34.731823   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:45:34.731851   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:45:34.830492   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:45:34.830526   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:34.860325   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:45:34.860350   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:34.928523   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:45:34.928564   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:37.460864   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:45:37.472124   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:45:37.472190   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:45:37.499832   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:37.499854   51251 cri.go:89] found id: ""
	I1018 17:45:37.499862   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:45:37.499920   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:37.503595   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:45:37.503663   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:45:37.531543   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:37.531563   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:37.531569   51251 cri.go:89] found id: ""
	I1018 17:45:37.531576   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:45:37.531630   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:37.535265   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:37.538643   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:45:37.538712   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:45:37.565328   51251 cri.go:89] found id: ""
	I1018 17:45:37.565359   51251 logs.go:282] 0 containers: []
	W1018 17:45:37.565368   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:45:37.565374   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:45:37.565434   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:45:37.602468   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:37.602489   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:37.602494   51251 cri.go:89] found id: ""
	I1018 17:45:37.602501   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:45:37.602557   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:37.606311   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:37.609849   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:45:37.609919   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:45:37.640018   51251 cri.go:89] found id: ""
	I1018 17:45:37.640087   51251 logs.go:282] 0 containers: []
	W1018 17:45:37.640110   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:45:37.640131   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:45:37.640216   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:45:37.666232   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:37.666305   51251 cri.go:89] found id: ""
	I1018 17:45:37.666334   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:45:37.666402   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:37.669826   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:45:37.669905   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:45:37.696068   51251 cri.go:89] found id: ""
	I1018 17:45:37.696104   51251 logs.go:282] 0 containers: []
	W1018 17:45:37.696112   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:45:37.696121   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:45:37.696158   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:37.767014   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:45:37.767049   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:37.799133   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:45:37.799158   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:45:37.883995   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:45:37.884029   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:37.919112   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:45:37.919145   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:37.968245   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:45:37.968269   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:45:38.008695   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:45:38.008740   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:45:38.109431   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:45:38.109506   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:45:38.124458   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:45:38.124529   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:45:38.217277   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:45:38.191743    9135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:38.192499    9135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:38.207164    9135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:38.208077    9135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:38.209702    9135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:45:38.191743    9135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:38.192499    9135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:38.207164    9135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:38.208077    9135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:38.209702    9135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:45:38.217297   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:45:38.217310   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:38.247001   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:45:38.247027   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:40.816985   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:45:40.827390   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:45:40.827474   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:45:40.854344   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:40.854363   51251 cri.go:89] found id: ""
	I1018 17:45:40.854371   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:45:40.854426   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:40.858780   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:45:40.858879   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:45:40.888649   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:40.888707   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:40.888726   51251 cri.go:89] found id: ""
	I1018 17:45:40.888754   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:45:40.888823   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:40.893141   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:40.897039   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:45:40.897111   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:45:40.930280   51251 cri.go:89] found id: ""
	I1018 17:45:40.930304   51251 logs.go:282] 0 containers: []
	W1018 17:45:40.930313   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:45:40.930319   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:45:40.930375   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:45:40.957741   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:40.957764   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:40.957769   51251 cri.go:89] found id: ""
	I1018 17:45:40.957777   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:45:40.957854   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:40.962938   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:40.967322   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:45:40.967388   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:45:40.995139   51251 cri.go:89] found id: ""
	I1018 17:45:40.995216   51251 logs.go:282] 0 containers: []
	W1018 17:45:40.995230   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:45:40.995237   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:45:40.995304   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:45:41.025259   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:41.025280   51251 cri.go:89] found id: ""
	I1018 17:45:41.025287   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:45:41.025344   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:41.029459   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:45:41.029553   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:45:41.055678   51251 cri.go:89] found id: ""
	I1018 17:45:41.055710   51251 logs.go:282] 0 containers: []
	W1018 17:45:41.055719   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:45:41.055728   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:45:41.055745   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:45:41.097365   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:45:41.097395   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:45:41.108644   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:45:41.108669   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:41.152656   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:45:41.152685   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:45:41.240199   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:45:41.240234   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:45:41.347931   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:45:41.347967   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:45:41.414489   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:45:41.405260    9247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:41.405872    9247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:41.407642    9247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:41.408232    9247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:41.410751    9247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:45:41.405260    9247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:41.405872    9247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:41.407642    9247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:41.408232    9247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:41.410751    9247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:45:41.414511   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:45:41.414525   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:41.440777   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:45:41.440802   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:41.496567   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:45:41.496602   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:41.569402   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:45:41.569445   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:41.599116   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:45:41.599143   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:44.128092   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:45:44.139312   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:45:44.139380   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:45:44.166514   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:44.166533   51251 cri.go:89] found id: ""
	I1018 17:45:44.166541   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:45:44.166596   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:44.170245   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:45:44.170317   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:45:44.210379   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:44.210397   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:44.210402   51251 cri.go:89] found id: ""
	I1018 17:45:44.210410   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:45:44.210464   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:44.214239   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:44.217585   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:45:44.217650   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:45:44.242978   51251 cri.go:89] found id: ""
	I1018 17:45:44.243001   51251 logs.go:282] 0 containers: []
	W1018 17:45:44.243009   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:45:44.243016   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:45:44.243069   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:45:44.270660   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:44.270680   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:44.270685   51251 cri.go:89] found id: ""
	I1018 17:45:44.270692   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:45:44.270746   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:44.274435   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:44.278022   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:45:44.278090   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:45:44.314849   51251 cri.go:89] found id: ""
	I1018 17:45:44.314873   51251 logs.go:282] 0 containers: []
	W1018 17:45:44.314881   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:45:44.314887   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:45:44.314951   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:45:44.345002   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:44.345025   51251 cri.go:89] found id: ""
	I1018 17:45:44.345034   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:45:44.345091   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:44.348718   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:45:44.348785   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:45:44.373779   51251 cri.go:89] found id: ""
	I1018 17:45:44.373804   51251 logs.go:282] 0 containers: []
	W1018 17:45:44.373812   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:45:44.373828   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:45:44.373839   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:45:44.448448   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:45:44.448482   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:45:44.479822   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:45:44.479848   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:45:44.583615   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:45:44.583649   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:45:44.597191   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:45:44.597217   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:44.623357   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:45:44.623385   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:44.680939   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:45:44.680970   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:44.715142   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:45:44.715173   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:44.742106   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:45:44.742133   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:45:44.808539   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:45:44.799128    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:44.799968    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:44.801462    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:44.801790    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:44.803327    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:45:44.799128    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:44.799968    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:44.801462    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:44.801790    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:44.803327    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:45:44.808609   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:45:44.808640   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:44.878644   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:45:44.878682   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:47.415612   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:45:47.426226   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:45:47.426291   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:45:47.453489   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:47.453509   51251 cri.go:89] found id: ""
	I1018 17:45:47.453517   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:45:47.453571   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:47.457326   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:45:47.457406   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:45:47.482854   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:47.482921   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:47.482931   51251 cri.go:89] found id: ""
	I1018 17:45:47.482939   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:45:47.482996   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:47.487182   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:47.490682   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:45:47.490788   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:45:47.518326   51251 cri.go:89] found id: ""
	I1018 17:45:47.518348   51251 logs.go:282] 0 containers: []
	W1018 17:45:47.518357   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:45:47.518364   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:45:47.518423   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:45:47.545707   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:47.545729   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:47.545734   51251 cri.go:89] found id: ""
	I1018 17:45:47.545742   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:45:47.545795   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:47.549377   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:47.552749   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:45:47.552816   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:45:47.578086   51251 cri.go:89] found id: ""
	I1018 17:45:47.578108   51251 logs.go:282] 0 containers: []
	W1018 17:45:47.578116   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:45:47.578122   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:45:47.578179   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:45:47.621041   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:47.621110   51251 cri.go:89] found id: ""
	I1018 17:45:47.621124   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:45:47.621185   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:47.624873   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:45:47.624982   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:45:47.651153   51251 cri.go:89] found id: ""
	I1018 17:45:47.651180   51251 logs.go:282] 0 containers: []
	W1018 17:45:47.651189   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:45:47.651198   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:45:47.651227   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:45:47.748488   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:45:47.748523   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:45:47.816047   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:45:47.807483    9500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:47.808149    9500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:47.809893    9500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:47.810874    9500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:47.812453    9500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:45:47.807483    9500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:47.808149    9500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:47.809893    9500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:47.810874    9500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:47.812453    9500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:45:47.816068   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:45:47.816080   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:47.845226   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:45:47.845251   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:47.898646   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:45:47.898681   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:47.939440   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:45:47.939471   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:47.973436   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:45:47.973499   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:45:48.008222   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:45:48.008264   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:45:48.022115   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:45:48.022146   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:48.101167   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:45:48.101270   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:48.133470   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:45:48.133539   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:45:50.714735   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:45:50.728888   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:45:50.729016   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:45:50.759926   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:50.759949   51251 cri.go:89] found id: ""
	I1018 17:45:50.759958   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:45:50.760018   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:50.764094   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:45:50.764177   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:45:50.790739   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:50.790770   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:50.790776   51251 cri.go:89] found id: ""
	I1018 17:45:50.790784   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:45:50.790848   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:50.794745   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:50.798617   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:45:50.798692   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:45:50.827817   51251 cri.go:89] found id: ""
	I1018 17:45:50.827854   51251 logs.go:282] 0 containers: []
	W1018 17:45:50.827863   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:45:50.827870   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:45:50.827952   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:45:50.856700   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:50.856719   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:50.856723   51251 cri.go:89] found id: ""
	I1018 17:45:50.856731   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:45:50.856784   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:50.860815   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:50.864675   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:45:50.864745   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:45:50.889856   51251 cri.go:89] found id: ""
	I1018 17:45:50.889881   51251 logs.go:282] 0 containers: []
	W1018 17:45:50.889889   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:45:50.889896   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:45:50.889976   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:45:50.918684   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:50.918708   51251 cri.go:89] found id: ""
	I1018 17:45:50.918716   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:45:50.918800   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:50.924460   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:45:50.924531   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:45:50.951436   51251 cri.go:89] found id: ""
	I1018 17:45:50.951457   51251 logs.go:282] 0 containers: []
	W1018 17:45:50.951465   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:45:50.951475   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:45:50.951491   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:45:50.967914   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:45:50.967945   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:51.025758   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:45:51.025791   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:51.076423   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:45:51.076458   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:51.107878   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:45:51.107909   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:51.140881   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:45:51.140910   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:45:51.218816   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:45:51.218847   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:45:51.285410   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:45:51.278013    9675 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:51.278510    9675 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:51.279993    9675 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:51.280335    9675 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:51.281812    9675 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:45:51.278013    9675 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:51.278510    9675 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:51.279993    9675 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:51.280335    9675 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:51.281812    9675 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:45:51.285432   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:45:51.285444   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:51.314747   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:45:51.314775   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:51.388168   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:45:51.388242   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:45:51.424772   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:45:51.424801   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:45:54.026323   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:45:54.037679   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:45:54.037753   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:45:54.064502   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:54.064524   51251 cri.go:89] found id: ""
	I1018 17:45:54.064532   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:45:54.064585   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:54.068305   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:45:54.068376   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:45:54.097996   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:54.098018   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:54.098023   51251 cri.go:89] found id: ""
	I1018 17:45:54.098031   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:45:54.098085   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:54.102024   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:54.105866   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:45:54.105944   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:45:54.139891   51251 cri.go:89] found id: ""
	I1018 17:45:54.139915   51251 logs.go:282] 0 containers: []
	W1018 17:45:54.139924   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:45:54.139931   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:45:54.139986   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:45:54.166319   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:54.166343   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:54.166347   51251 cri.go:89] found id: ""
	I1018 17:45:54.166355   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:45:54.166420   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:54.170521   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:54.174527   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:45:54.174590   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:45:54.219178   51251 cri.go:89] found id: ""
	I1018 17:45:54.219212   51251 logs.go:282] 0 containers: []
	W1018 17:45:54.219220   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:45:54.219227   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:45:54.219283   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:45:54.246579   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:54.246602   51251 cri.go:89] found id: ""
	I1018 17:45:54.246610   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:45:54.246667   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:54.250546   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:45:54.250651   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:45:54.282408   51251 cri.go:89] found id: ""
	I1018 17:45:54.282432   51251 logs.go:282] 0 containers: []
	W1018 17:45:54.282440   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:45:54.282449   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:45:54.282460   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:45:54.367430   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:45:54.348041    9774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:54.348865    9774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:54.361407    9774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:54.362108    9774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:54.363737    9774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:45:54.348041    9774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:54.348865    9774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:54.361407    9774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:54.362108    9774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:54.363737    9774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:45:54.367454   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:45:54.367467   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:54.393831   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:45:54.393863   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:54.435123   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:45:54.435155   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:54.491144   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:45:54.491188   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:54.527193   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:45:54.527223   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:54.604327   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:45:54.604369   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:54.636282   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:45:54.636312   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:45:54.714664   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:45:54.714698   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:45:54.752480   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:45:54.752508   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:45:54.858349   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:45:54.858422   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:45:57.373300   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:45:57.384246   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:45:57.384335   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:45:57.415506   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:57.415571   51251 cri.go:89] found id: ""
	I1018 17:45:57.415595   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:45:57.415671   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:57.419389   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:45:57.419503   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:45:57.445186   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:57.445206   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:57.445211   51251 cri.go:89] found id: ""
	I1018 17:45:57.445219   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:45:57.445281   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:57.449004   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:57.452413   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:45:57.452492   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:45:57.477864   51251 cri.go:89] found id: ""
	I1018 17:45:57.477888   51251 logs.go:282] 0 containers: []
	W1018 17:45:57.477896   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:45:57.477903   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:45:57.477962   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:45:57.504898   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:57.504920   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:57.504931   51251 cri.go:89] found id: ""
	I1018 17:45:57.504977   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:45:57.505034   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:57.509061   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:57.513614   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:45:57.513685   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:45:57.544310   51251 cri.go:89] found id: ""
	I1018 17:45:57.544332   51251 logs.go:282] 0 containers: []
	W1018 17:45:57.544340   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:45:57.544346   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:45:57.544403   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:45:57.571245   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:57.571266   51251 cri.go:89] found id: ""
	I1018 17:45:57.571274   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:45:57.571331   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:57.575106   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:45:57.575176   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:45:57.606111   51251 cri.go:89] found id: ""
	I1018 17:45:57.606144   51251 logs.go:282] 0 containers: []
	W1018 17:45:57.606154   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:45:57.606162   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:45:57.606175   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:57.634184   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:45:57.634212   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:57.700157   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:45:57.700193   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:57.740730   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:45:57.740759   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:57.767473   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:45:57.767501   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:57.792761   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:45:57.792788   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:45:57.872610   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:45:57.872686   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:45:57.970465   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:45:57.970503   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:45:57.983943   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:45:57.983969   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:45:58.065431   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:45:58.056364    9964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:58.057407    9964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:58.058182    9964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:58.059825    9964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:58.060434    9964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:45:58.056364    9964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:58.057407    9964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:58.058182    9964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:58.059825    9964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:58.060434    9964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:45:58.065498   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:45:58.065512   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:58.140361   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:45:58.140407   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:46:00.709339   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:46:00.720914   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:46:00.721109   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:46:00.749016   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:00.749036   51251 cri.go:89] found id: ""
	I1018 17:46:00.749043   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:46:00.749098   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:00.752785   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:46:00.752913   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:46:00.780089   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:00.780157   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:00.780174   51251 cri.go:89] found id: ""
	I1018 17:46:00.780195   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:46:00.780277   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:00.784027   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:00.787918   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:46:00.787984   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:46:00.815886   51251 cri.go:89] found id: ""
	I1018 17:46:00.815911   51251 logs.go:282] 0 containers: []
	W1018 17:46:00.815920   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:46:00.815927   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:46:00.815984   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:46:00.843641   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:00.843672   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:00.843677   51251 cri.go:89] found id: ""
	I1018 17:46:00.843690   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:46:00.843749   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:00.857213   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:00.861599   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:46:00.861750   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:46:00.895883   51251 cri.go:89] found id: ""
	I1018 17:46:00.895957   51251 logs.go:282] 0 containers: []
	W1018 17:46:00.895981   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:46:00.896000   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:46:00.896070   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:46:00.925992   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:00.926061   51251 cri.go:89] found id: ""
	I1018 17:46:00.926086   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:46:00.926167   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:00.930024   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:46:00.930108   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:46:00.958457   51251 cri.go:89] found id: ""
	I1018 17:46:00.958482   51251 logs.go:282] 0 containers: []
	W1018 17:46:00.958490   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:46:00.958499   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:46:00.958511   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:01.035152   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:46:01.035187   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:01.069631   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:46:01.069662   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:01.099442   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:46:01.099466   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:46:01.185919   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:46:01.185957   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:46:01.233776   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:46:01.233801   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:46:01.247414   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:46:01.247442   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:01.275612   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:46:01.275640   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:01.332794   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:46:01.332829   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:01.367809   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:46:01.367840   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:46:01.464892   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:46:01.464929   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:46:01.535577   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:46:01.527773   10121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:01.528316   10121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:01.530190   10121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:01.530564   10121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:01.531863   10121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:46:01.527773   10121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:01.528316   10121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:01.530190   10121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:01.530564   10121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:01.531863   10121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:46:04.037058   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:46:04.047958   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:46:04.048043   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:46:04.080745   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:04.080770   51251 cri.go:89] found id: ""
	I1018 17:46:04.080779   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:46:04.080837   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:04.084749   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:46:04.084819   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:46:04.113194   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:04.113268   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:04.113275   51251 cri.go:89] found id: ""
	I1018 17:46:04.113283   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:46:04.113374   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:04.117058   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:04.121021   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:46:04.121088   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:46:04.150209   51251 cri.go:89] found id: ""
	I1018 17:46:04.150233   51251 logs.go:282] 0 containers: []
	W1018 17:46:04.150242   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:46:04.150248   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:46:04.150308   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:46:04.182648   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:04.182719   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:04.182732   51251 cri.go:89] found id: ""
	I1018 17:46:04.182740   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:46:04.182811   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:04.187068   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:04.191187   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:46:04.191265   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:46:04.226123   51251 cri.go:89] found id: ""
	I1018 17:46:04.226147   51251 logs.go:282] 0 containers: []
	W1018 17:46:04.226158   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:46:04.226165   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:46:04.226226   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:46:04.252111   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:04.252132   51251 cri.go:89] found id: ""
	I1018 17:46:04.252141   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:46:04.252196   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:04.255953   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:46:04.256026   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:46:04.287389   51251 cri.go:89] found id: ""
	I1018 17:46:04.287415   51251 logs.go:282] 0 containers: []
	W1018 17:46:04.287423   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:46:04.287432   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:46:04.287443   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:46:04.321947   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:46:04.321973   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:46:04.430342   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:46:04.430376   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:46:04.442744   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:46:04.442769   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:46:04.506948   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:46:04.498862   10206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:04.499448   10206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:04.501006   10206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:04.501596   10206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:04.503108   10206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:46:04.498862   10206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:04.499448   10206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:04.501006   10206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:04.501596   10206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:04.503108   10206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:46:04.507014   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:46:04.507043   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:04.543328   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:46:04.543361   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:04.572765   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:46:04.572798   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:04.602775   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:46:04.602801   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:04.658777   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:46:04.658812   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:04.732490   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:46:04.732537   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:04.759977   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:46:04.760005   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:46:07.339053   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:46:07.349656   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:46:07.349760   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:46:07.379978   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:07.380001   51251 cri.go:89] found id: ""
	I1018 17:46:07.380011   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:46:07.380093   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:07.383927   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:46:07.384018   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:46:07.409769   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:07.409800   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:07.409806   51251 cri.go:89] found id: ""
	I1018 17:46:07.409814   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:46:07.409902   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:07.413658   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:07.416960   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:46:07.417067   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:46:07.442892   51251 cri.go:89] found id: ""
	I1018 17:46:07.442916   51251 logs.go:282] 0 containers: []
	W1018 17:46:07.442924   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:46:07.442930   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:46:07.442989   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:46:07.469419   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:07.469440   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:07.469445   51251 cri.go:89] found id: ""
	I1018 17:46:07.469452   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:46:07.469508   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:07.473607   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:07.477386   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:46:07.477501   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:46:07.504080   51251 cri.go:89] found id: ""
	I1018 17:46:07.504105   51251 logs.go:282] 0 containers: []
	W1018 17:46:07.504116   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:46:07.504122   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:46:07.504231   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:46:07.531758   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:07.531781   51251 cri.go:89] found id: ""
	I1018 17:46:07.531790   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:46:07.531870   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:07.535733   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:46:07.535830   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:46:07.564437   51251 cri.go:89] found id: ""
	I1018 17:46:07.564463   51251 logs.go:282] 0 containers: []
	W1018 17:46:07.564471   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:46:07.564480   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:46:07.564524   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:07.628243   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:46:07.628278   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:46:07.662025   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:46:07.662052   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:46:07.764863   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:46:07.764897   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:46:07.776837   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:46:07.776865   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:46:07.847586   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:46:07.839604   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:07.840186   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:07.841835   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:07.842344   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:07.843875   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:46:07.839604   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:07.840186   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:07.841835   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:07.842344   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:07.843875   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:46:07.847606   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:46:07.847622   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:07.880085   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:46:07.880117   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:07.963636   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:46:07.963671   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:07.994194   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:46:07.994222   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:08.025564   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:46:08.025595   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:46:08.108415   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:46:08.108451   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:10.642798   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:46:10.653476   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:46:10.653548   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:46:10.679376   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:10.679398   51251 cri.go:89] found id: ""
	I1018 17:46:10.679407   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:46:10.679465   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:10.683355   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:46:10.683427   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:46:10.710429   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:10.710450   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:10.710454   51251 cri.go:89] found id: ""
	I1018 17:46:10.710461   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:46:10.710513   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:10.714130   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:10.717443   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:46:10.717506   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:46:10.744042   51251 cri.go:89] found id: ""
	I1018 17:46:10.744064   51251 logs.go:282] 0 containers: []
	W1018 17:46:10.744071   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:46:10.744078   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:46:10.744132   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:46:10.773166   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:10.773191   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:10.773196   51251 cri.go:89] found id: ""
	I1018 17:46:10.773203   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:46:10.773282   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:10.777442   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:10.781226   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:46:10.781299   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:46:10.808886   51251 cri.go:89] found id: ""
	I1018 17:46:10.808909   51251 logs.go:282] 0 containers: []
	W1018 17:46:10.808917   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:46:10.808924   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:46:10.809009   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:46:10.836634   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:10.836656   51251 cri.go:89] found id: ""
	I1018 17:46:10.836664   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:46:10.836720   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:10.840695   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:46:10.840772   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:46:10.869735   51251 cri.go:89] found id: ""
	I1018 17:46:10.869799   51251 logs.go:282] 0 containers: []
	W1018 17:46:10.869812   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:46:10.869822   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:46:10.869833   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:46:10.949626   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:46:10.949665   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:46:11.057346   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:46:11.057383   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:11.139105   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:46:11.139141   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:11.170764   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:46:11.170861   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:11.214148   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:46:11.214173   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:46:11.245381   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:46:11.245409   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:46:11.258609   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:46:11.258636   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:46:11.329040   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:46:11.320826   10509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:11.321453   10509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:11.322971   10509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:11.323467   10509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:11.325006   10509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:46:11.320826   10509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:11.321453   10509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:11.322971   10509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:11.323467   10509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:11.325006   10509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:46:11.329060   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:46:11.329072   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:11.354686   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:46:11.354710   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:11.393844   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:46:11.393872   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:13.965067   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:46:13.977065   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:46:13.977139   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:46:14.006565   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:14.006590   51251 cri.go:89] found id: ""
	I1018 17:46:14.006600   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:46:14.006694   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:14.011312   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:46:14.011387   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:46:14.040339   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:14.040367   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:14.040372   51251 cri.go:89] found id: ""
	I1018 17:46:14.040380   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:46:14.040437   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:14.044065   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:14.047760   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:46:14.047831   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:46:14.074918   51251 cri.go:89] found id: ""
	I1018 17:46:14.074943   51251 logs.go:282] 0 containers: []
	W1018 17:46:14.074952   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:46:14.074960   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:46:14.075023   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:46:14.107504   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:14.107526   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:14.107531   51251 cri.go:89] found id: ""
	I1018 17:46:14.107539   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:46:14.107591   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:14.111227   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:14.114719   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:46:14.114811   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:46:14.145967   51251 cri.go:89] found id: ""
	I1018 17:46:14.146042   51251 logs.go:282] 0 containers: []
	W1018 17:46:14.146062   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:46:14.146082   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:46:14.146164   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:46:14.186824   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:14.186888   51251 cri.go:89] found id: ""
	I1018 17:46:14.186910   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:46:14.186990   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:14.190545   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:46:14.190628   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:46:14.226876   51251 cri.go:89] found id: ""
	I1018 17:46:14.226971   51251 logs.go:282] 0 containers: []
	W1018 17:46:14.226994   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:46:14.227020   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:46:14.227045   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:46:14.329164   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:46:14.329201   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:46:14.397274   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:46:14.389270   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:14.390097   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:14.391638   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:14.392076   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:14.393694   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:46:14.389270   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:14.390097   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:14.391638   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:14.392076   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:14.393694   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:46:14.397296   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:46:14.397309   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:14.426769   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:46:14.426796   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:14.486615   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:46:14.486650   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:14.559349   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:46:14.559386   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:14.587426   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:46:14.587455   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:46:14.664068   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:46:14.664104   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:46:14.675861   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:46:14.675886   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:14.708879   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:46:14.708911   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:14.736861   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:46:14.736890   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:46:17.281896   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:46:17.292988   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:46:17.293081   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:46:17.321611   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:17.321634   51251 cri.go:89] found id: ""
	I1018 17:46:17.321642   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:46:17.321697   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:17.325317   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:46:17.325398   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:46:17.352512   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:17.352534   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:17.352538   51251 cri.go:89] found id: ""
	I1018 17:46:17.352546   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:46:17.352599   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:17.357098   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:17.360560   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:46:17.360677   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:46:17.390732   51251 cri.go:89] found id: ""
	I1018 17:46:17.390762   51251 logs.go:282] 0 containers: []
	W1018 17:46:17.390770   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:46:17.390778   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:46:17.390842   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:46:17.419824   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:17.419846   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:17.419851   51251 cri.go:89] found id: ""
	I1018 17:46:17.419858   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:46:17.419916   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:17.423710   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:17.427116   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:46:17.427185   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:46:17.453579   51251 cri.go:89] found id: ""
	I1018 17:46:17.453602   51251 logs.go:282] 0 containers: []
	W1018 17:46:17.453610   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:46:17.453617   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:46:17.453705   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:46:17.486285   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:17.486309   51251 cri.go:89] found id: ""
	I1018 17:46:17.486318   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:46:17.486372   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:17.490015   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:46:17.490104   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:46:17.518259   51251 cri.go:89] found id: ""
	I1018 17:46:17.518284   51251 logs.go:282] 0 containers: []
	W1018 17:46:17.518292   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:46:17.518301   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:46:17.518332   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:46:17.614000   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:46:17.614035   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:46:17.626518   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:46:17.626553   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:17.684157   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:46:17.684191   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:46:17.730343   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:46:17.730369   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:46:17.798308   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:46:17.789990   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:17.790724   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:17.792367   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:17.792674   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:17.794211   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:46:17.789990   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:17.790724   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:17.792367   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:17.792674   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:17.794211   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:46:17.798326   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:46:17.798338   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:17.823833   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:46:17.823857   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:17.865773   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:46:17.865799   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:17.935865   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:46:17.935900   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:17.978061   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:46:17.978088   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:18.006175   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:46:18.006205   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:46:20.594229   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:46:20.605152   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:46:20.605223   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:46:20.633212   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:20.633234   51251 cri.go:89] found id: ""
	I1018 17:46:20.633243   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:46:20.633310   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:20.637046   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:46:20.637118   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:46:20.663217   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:20.663238   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:20.663246   51251 cri.go:89] found id: ""
	I1018 17:46:20.663253   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:46:20.663325   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:20.667226   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:20.670621   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:46:20.670719   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:46:20.698213   51251 cri.go:89] found id: ""
	I1018 17:46:20.698235   51251 logs.go:282] 0 containers: []
	W1018 17:46:20.698244   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:46:20.698287   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:46:20.698367   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:46:20.730404   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:20.730434   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:20.730439   51251 cri.go:89] found id: ""
	I1018 17:46:20.730447   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:46:20.730519   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:20.734442   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:20.738131   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:46:20.738222   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:46:20.773079   51251 cri.go:89] found id: ""
	I1018 17:46:20.773149   51251 logs.go:282] 0 containers: []
	W1018 17:46:20.773171   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:46:20.773193   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:46:20.773277   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:46:20.800462   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:20.800534   51251 cri.go:89] found id: ""
	I1018 17:46:20.800569   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:46:20.800664   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:20.805115   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:46:20.805213   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:46:20.830418   51251 cri.go:89] found id: ""
	I1018 17:46:20.830442   51251 logs.go:282] 0 containers: []
	W1018 17:46:20.830451   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:46:20.830459   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:46:20.830470   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:20.912043   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:46:20.912075   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:20.938545   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:46:20.938572   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:20.977936   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:46:20.978010   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:46:21.013920   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:46:21.013950   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:46:21.119416   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:46:21.119450   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:46:21.132924   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:46:21.133048   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:46:21.220628   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:46:21.211038   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:21.212205   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:21.213238   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:21.213888   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:21.215798   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:46:21.211038   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:21.212205   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:21.213238   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:21.213888   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:21.215798   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:46:21.220657   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:46:21.220677   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:21.249593   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:46:21.249618   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:46:21.329125   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:46:21.329162   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:21.387066   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:46:21.387097   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:23.926900   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:46:23.937764   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:46:23.937832   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:46:23.976069   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:23.976129   51251 cri.go:89] found id: ""
	I1018 17:46:23.976159   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:46:23.976235   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:23.979863   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:46:23.979943   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:46:24.009930   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:24.009950   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:24.009954   51251 cri.go:89] found id: ""
	I1018 17:46:24.009963   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:46:24.010025   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:24.014274   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:24.018246   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:46:24.018317   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:46:24.046546   51251 cri.go:89] found id: ""
	I1018 17:46:24.046571   51251 logs.go:282] 0 containers: []
	W1018 17:46:24.046589   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:46:24.046596   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:46:24.046659   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:46:24.073391   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:24.073411   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:24.073416   51251 cri.go:89] found id: ""
	I1018 17:46:24.073428   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:46:24.073485   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:24.077447   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:24.081009   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:46:24.081083   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:46:24.108804   51251 cri.go:89] found id: ""
	I1018 17:46:24.108828   51251 logs.go:282] 0 containers: []
	W1018 17:46:24.108837   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:46:24.108843   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:46:24.108905   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:46:24.144321   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:24.144348   51251 cri.go:89] found id: ""
	I1018 17:46:24.144357   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:46:24.144413   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:24.148488   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:46:24.148592   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:46:24.176586   51251 cri.go:89] found id: ""
	I1018 17:46:24.176611   51251 logs.go:282] 0 containers: []
	W1018 17:46:24.176619   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:46:24.176629   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:46:24.176640   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:46:24.254257   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:46:24.245066   11017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:24.246406   11017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:24.248217   11017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:24.248923   11017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:24.250447   11017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:46:24.245066   11017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:24.246406   11017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:24.248217   11017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:24.248923   11017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:24.250447   11017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:46:24.254278   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:46:24.254290   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:24.281646   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:46:24.281673   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:24.354939   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:46:24.354974   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:24.383116   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:46:24.383140   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:46:24.462892   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:46:24.462927   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:46:24.504197   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:46:24.504228   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:24.562928   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:46:24.562961   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:24.599399   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:46:24.599433   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:24.631679   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:46:24.631746   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:46:24.732308   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:46:24.732344   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:46:27.244674   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:46:27.255895   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:46:27.256012   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:46:27.287040   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:27.287060   51251 cri.go:89] found id: ""
	I1018 17:46:27.287069   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:46:27.287149   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:27.290894   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:46:27.290963   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:46:27.320255   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:27.320275   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:27.320280   51251 cri.go:89] found id: ""
	I1018 17:46:27.320287   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:46:27.320342   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:27.323980   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:27.327547   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:46:27.327617   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:46:27.352735   51251 cri.go:89] found id: ""
	I1018 17:46:27.352759   51251 logs.go:282] 0 containers: []
	W1018 17:46:27.352768   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:46:27.352774   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:46:27.352857   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:46:27.379505   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:27.379527   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:27.379532   51251 cri.go:89] found id: ""
	I1018 17:46:27.379539   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:46:27.379595   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:27.383294   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:27.386911   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:46:27.386986   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:46:27.415912   51251 cri.go:89] found id: ""
	I1018 17:46:27.415934   51251 logs.go:282] 0 containers: []
	W1018 17:46:27.415943   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:46:27.415949   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:46:27.416005   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:46:27.445650   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:27.445672   51251 cri.go:89] found id: ""
	I1018 17:46:27.445682   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:46:27.445741   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:27.449604   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:46:27.449704   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:46:27.484794   51251 cri.go:89] found id: ""
	I1018 17:46:27.484859   51251 logs.go:282] 0 containers: []
	W1018 17:46:27.484882   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:46:27.484904   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:46:27.484958   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:46:27.584293   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:46:27.584332   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:27.648407   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:46:27.648440   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:27.676738   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:46:27.676766   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:46:27.689349   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:46:27.689383   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:46:27.762040   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:46:27.753582   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:27.754358   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:27.756209   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:27.756792   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:27.758400   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:46:27.753582   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:27.754358   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:27.756209   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:27.756792   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:27.758400   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:46:27.762060   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:46:27.762074   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:27.788162   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:46:27.788190   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:27.822151   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:46:27.822180   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:27.891958   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:46:27.891993   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:27.920389   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:46:27.920413   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:46:28.000828   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:46:28.000902   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:46:30.539090   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:46:30.549624   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:46:30.549693   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:46:30.576191   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:30.576210   51251 cri.go:89] found id: ""
	I1018 17:46:30.576218   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:46:30.576270   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:30.580032   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:46:30.580143   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:46:30.605554   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:30.605576   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:30.605582   51251 cri.go:89] found id: ""
	I1018 17:46:30.605600   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:46:30.605693   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:30.609432   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:30.613226   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:46:30.613297   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:46:30.640206   51251 cri.go:89] found id: ""
	I1018 17:46:30.640232   51251 logs.go:282] 0 containers: []
	W1018 17:46:30.640241   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:46:30.640248   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:46:30.640305   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:46:30.667995   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:30.668022   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:30.668027   51251 cri.go:89] found id: ""
	I1018 17:46:30.668035   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:46:30.668090   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:30.671800   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:30.675538   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:46:30.675607   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:46:30.700530   51251 cri.go:89] found id: ""
	I1018 17:46:30.700554   51251 logs.go:282] 0 containers: []
	W1018 17:46:30.700562   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:46:30.700568   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:46:30.700623   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:46:30.728589   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:30.728610   51251 cri.go:89] found id: ""
	I1018 17:46:30.728618   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:46:30.728673   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:30.732322   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:46:30.732414   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:46:30.757553   51251 cri.go:89] found id: ""
	I1018 17:46:30.757577   51251 logs.go:282] 0 containers: []
	W1018 17:46:30.757586   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:46:30.757594   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:46:30.757635   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:46:30.823888   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:46:30.816309   11289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:30.816862   11289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:30.818339   11289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:30.818806   11289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:30.820240   11289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:46:30.816309   11289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:30.816862   11289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:30.818339   11289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:30.818806   11289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:30.820240   11289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:46:30.823908   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:46:30.823921   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:30.849213   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:46:30.849239   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:30.906353   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:46:30.906387   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:30.995137   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:46:30.995173   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:46:31.081727   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:46:31.081761   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:46:31.125969   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:46:31.125994   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:46:31.232441   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:46:31.232474   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:46:31.244403   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:46:31.244430   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:31.288661   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:46:31.288704   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:31.322411   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:46:31.322439   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:33.853119   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:46:33.864167   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:46:33.864236   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:46:33.897397   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:33.897420   51251 cri.go:89] found id: ""
	I1018 17:46:33.897428   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:46:33.897485   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:33.901240   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:46:33.901310   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:46:33.929613   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:33.929646   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:33.929651   51251 cri.go:89] found id: ""
	I1018 17:46:33.929658   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:46:33.929735   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:33.933312   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:33.936856   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:46:33.936964   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:46:33.977530   51251 cri.go:89] found id: ""
	I1018 17:46:33.977558   51251 logs.go:282] 0 containers: []
	W1018 17:46:33.977566   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:46:33.977573   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:46:33.977631   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:46:34.012562   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:34.012584   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:34.012589   51251 cri.go:89] found id: ""
	I1018 17:46:34.012596   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:46:34.012656   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:34.016474   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:34.020781   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:46:34.020852   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:46:34.046987   51251 cri.go:89] found id: ""
	I1018 17:46:34.047014   51251 logs.go:282] 0 containers: []
	W1018 17:46:34.047022   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:46:34.047029   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:46:34.047086   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:46:34.076543   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:34.076564   51251 cri.go:89] found id: ""
	I1018 17:46:34.076575   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:46:34.076631   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:34.080378   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:46:34.080449   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:46:34.107694   51251 cri.go:89] found id: ""
	I1018 17:46:34.107716   51251 logs.go:282] 0 containers: []
	W1018 17:46:34.107724   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:46:34.107734   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:46:34.107745   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:46:34.119659   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:46:34.119686   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:34.177728   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:46:34.177831   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:34.238468   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:46:34.238509   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:34.321582   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:46:34.321620   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:34.353750   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:46:34.353776   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:34.384525   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:46:34.384552   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:46:34.462817   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:46:34.462849   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:46:34.494982   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:46:34.495010   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:46:34.598168   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:46:34.598203   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:46:34.675787   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:46:34.666968   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:34.667733   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:34.669584   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:34.670213   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:34.671781   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:46:34.666968   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:34.667733   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:34.669584   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:34.670213   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:34.671781   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:46:34.675809   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:46:34.675822   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:37.204073   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:46:37.217257   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:46:37.217324   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:46:37.242870   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:37.242892   51251 cri.go:89] found id: ""
	I1018 17:46:37.242900   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:46:37.242956   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:37.246583   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:46:37.246652   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:46:37.272095   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:37.272157   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:37.272174   51251 cri.go:89] found id: ""
	I1018 17:46:37.272195   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:46:37.272279   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:37.276536   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:37.280121   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:46:37.280190   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:46:37.305151   51251 cri.go:89] found id: ""
	I1018 17:46:37.305173   51251 logs.go:282] 0 containers: []
	W1018 17:46:37.305182   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:46:37.305188   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:46:37.305244   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:46:37.338068   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:37.338137   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:37.338155   51251 cri.go:89] found id: ""
	I1018 17:46:37.338191   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:46:37.338263   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:37.342725   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:37.346547   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:46:37.346621   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:46:37.374074   51251 cri.go:89] found id: ""
	I1018 17:46:37.374095   51251 logs.go:282] 0 containers: []
	W1018 17:46:37.374104   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:46:37.374110   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:46:37.374167   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:46:37.405324   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:37.405346   51251 cri.go:89] found id: ""
	I1018 17:46:37.405360   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:46:37.405434   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:37.409814   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:46:37.409899   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:46:37.435527   51251 cri.go:89] found id: ""
	I1018 17:46:37.435551   51251 logs.go:282] 0 containers: []
	W1018 17:46:37.435560   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:46:37.435568   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:46:37.435579   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:46:37.504448   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:46:37.496518   11559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:37.497134   11559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:37.498616   11559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:37.499058   11559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:37.500376   11559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:46:37.496518   11559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:37.497134   11559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:37.498616   11559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:37.499058   11559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:37.500376   11559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:46:37.504468   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:46:37.504482   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:37.533375   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:46:37.533403   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:37.598625   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:46:37.598661   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:37.634535   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:46:37.634563   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:46:37.717277   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:46:37.717311   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:46:37.818978   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:46:37.819016   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:46:37.832055   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:46:37.832084   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:37.904377   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:46:37.904408   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:37.938939   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:46:37.938966   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:37.981000   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:46:37.981027   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:46:40.513454   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:46:40.524358   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:46:40.524437   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:46:40.552377   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:40.552454   51251 cri.go:89] found id: ""
	I1018 17:46:40.552475   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:46:40.552563   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:40.556445   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:46:40.556565   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:46:40.582695   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:40.582726   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:40.582732   51251 cri.go:89] found id: ""
	I1018 17:46:40.582739   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:46:40.582814   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:40.586779   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:40.590379   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:46:40.590449   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:46:40.618010   51251 cri.go:89] found id: ""
	I1018 17:46:40.618034   51251 logs.go:282] 0 containers: []
	W1018 17:46:40.618050   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:46:40.618056   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:46:40.618113   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:46:40.648753   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:40.648776   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:40.648782   51251 cri.go:89] found id: ""
	I1018 17:46:40.648790   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:46:40.648848   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:40.652681   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:40.656399   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:46:40.656475   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:46:40.682133   51251 cri.go:89] found id: ""
	I1018 17:46:40.682157   51251 logs.go:282] 0 containers: []
	W1018 17:46:40.682165   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:46:40.682180   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:46:40.682236   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:46:40.709218   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:40.709242   51251 cri.go:89] found id: ""
	I1018 17:46:40.709250   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:46:40.709309   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:40.713679   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:46:40.713762   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:46:40.739858   51251 cri.go:89] found id: ""
	I1018 17:46:40.739881   51251 logs.go:282] 0 containers: []
	W1018 17:46:40.739889   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:46:40.739899   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:46:40.739910   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:40.767013   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:46:40.767039   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:46:40.815169   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:46:40.815198   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:46:40.828097   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:46:40.828174   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:40.854852   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:46:40.854880   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:40.928587   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:46:40.928623   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:40.967185   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:46:40.967264   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:41.043445   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:46:41.043480   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:41.073682   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:46:41.073706   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:46:41.167926   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:46:41.167960   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:46:41.279975   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:46:41.280011   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:46:41.354826   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:46:41.337935   11766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:41.339488   11766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:41.340251   11766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:41.347202   11766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:41.347805   11766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:46:41.337935   11766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:41.339488   11766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:41.340251   11766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:41.347202   11766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:41.347805   11766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:46:43.856192   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:46:43.867961   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:46:43.868072   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:46:43.894221   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:43.894243   51251 cri.go:89] found id: ""
	I1018 17:46:43.894252   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:46:43.894332   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:43.898170   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:46:43.898263   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:46:43.925956   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:43.926031   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:43.926050   51251 cri.go:89] found id: ""
	I1018 17:46:43.926070   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:46:43.926142   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:43.929746   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:43.933185   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:46:43.933255   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:46:43.959602   51251 cri.go:89] found id: ""
	I1018 17:46:43.959627   51251 logs.go:282] 0 containers: []
	W1018 17:46:43.959635   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:46:43.959647   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:46:43.959704   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:46:43.991256   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:43.991325   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:43.991354   51251 cri.go:89] found id: ""
	I1018 17:46:43.991375   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:46:43.991457   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:43.995372   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:43.999083   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:46:43.999191   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:46:44.027597   51251 cri.go:89] found id: ""
	I1018 17:46:44.027632   51251 logs.go:282] 0 containers: []
	W1018 17:46:44.027641   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:46:44.027647   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:46:44.027715   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:46:44.055061   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:44.055085   51251 cri.go:89] found id: ""
	I1018 17:46:44.055094   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:46:44.055163   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:44.059234   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:46:44.059339   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:46:44.087631   51251 cri.go:89] found id: ""
	I1018 17:46:44.087653   51251 logs.go:282] 0 containers: []
	W1018 17:46:44.087661   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:46:44.087670   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:46:44.087681   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:46:44.189442   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:46:44.189477   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:44.218935   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:46:44.218961   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:44.286708   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:46:44.286746   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:44.321434   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:46:44.321463   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:46:44.399455   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:46:44.399492   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:46:44.434475   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:46:44.434502   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:46:44.448230   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:46:44.448256   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:46:44.523028   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:46:44.515201   11882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:44.515969   11882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:44.517455   11882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:44.517964   11882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:44.519503   11882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:46:44.515201   11882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:44.515969   11882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:44.517455   11882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:44.517964   11882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:44.519503   11882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:46:44.523047   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:46:44.523060   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:44.559772   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:46:44.559799   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:44.632864   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:46:44.632968   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:47.163147   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:46:47.174684   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:46:47.174753   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:46:47.212548   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:47.212575   51251 cri.go:89] found id: ""
	I1018 17:46:47.212583   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:46:47.212638   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:47.216970   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:46:47.217043   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:46:47.246472   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:47.246547   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:47.246565   51251 cri.go:89] found id: ""
	I1018 17:46:47.246585   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:46:47.246669   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:47.252448   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:47.255988   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:46:47.256113   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:46:47.287109   51251 cri.go:89] found id: ""
	I1018 17:46:47.287134   51251 logs.go:282] 0 containers: []
	W1018 17:46:47.287144   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:46:47.287150   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:46:47.287211   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:46:47.316914   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:47.316964   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:47.316969   51251 cri.go:89] found id: ""
	I1018 17:46:47.316977   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:46:47.317032   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:47.320849   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:47.324385   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:46:47.324455   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:46:47.351869   51251 cri.go:89] found id: ""
	I1018 17:46:47.351894   51251 logs.go:282] 0 containers: []
	W1018 17:46:47.351902   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:46:47.351908   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:46:47.351963   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:46:47.378692   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:47.378712   51251 cri.go:89] found id: ""
	I1018 17:46:47.378720   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:46:47.378773   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:47.382267   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:46:47.382341   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:46:47.417848   51251 cri.go:89] found id: ""
	I1018 17:46:47.417914   51251 logs.go:282] 0 containers: []
	W1018 17:46:47.417928   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:46:47.417938   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:46:47.417953   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:46:47.515489   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:46:47.515527   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:46:47.598137   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:46:47.585088   11978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:47.586210   11978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:47.586811   11978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:47.592142   11978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:47.592951   11978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:46:47.585088   11978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:47.586210   11978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:47.586811   11978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:47.592142   11978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:47.592951   11978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:46:47.598159   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:46:47.598172   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:47.627147   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:46:47.627171   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:47.685715   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:46:47.685749   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:47.729509   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:46:47.729542   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:47.802620   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:46:47.802658   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:47.841366   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:46:47.841393   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:46:47.853500   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:46:47.853528   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:47.882085   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:46:47.882112   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:46:47.962102   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:46:47.962182   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:46:50.497378   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:46:50.509438   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:46:50.509515   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:46:50.536827   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:50.536845   51251 cri.go:89] found id: ""
	I1018 17:46:50.536853   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:46:50.536906   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:50.540656   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:46:50.540736   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:46:50.572295   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:50.572315   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:50.572319   51251 cri.go:89] found id: ""
	I1018 17:46:50.572326   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:46:50.572381   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:50.576114   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:50.579678   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:46:50.579767   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:46:50.604801   51251 cri.go:89] found id: ""
	I1018 17:46:50.604883   51251 logs.go:282] 0 containers: []
	W1018 17:46:50.604907   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:46:50.604953   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:46:50.605039   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:46:50.630628   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:50.630689   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:50.630709   51251 cri.go:89] found id: ""
	I1018 17:46:50.630731   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:46:50.630799   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:50.634652   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:50.638142   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:46:50.638211   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:46:50.668081   51251 cri.go:89] found id: ""
	I1018 17:46:50.668158   51251 logs.go:282] 0 containers: []
	W1018 17:46:50.668178   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:46:50.668199   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:46:50.668286   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:46:50.695569   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:50.695633   51251 cri.go:89] found id: ""
	I1018 17:46:50.695655   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:46:50.695739   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:50.699470   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:46:50.699542   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:46:50.727412   51251 cri.go:89] found id: ""
	I1018 17:46:50.727436   51251 logs.go:282] 0 containers: []
	W1018 17:46:50.727445   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:46:50.727454   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:46:50.727467   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:50.753408   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:46:50.753435   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:50.827768   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:46:50.827848   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:50.859978   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:46:50.860003   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:46:50.939527   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:46:50.939561   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:46:50.980682   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:46:50.980711   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:46:51.076628   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:46:51.076663   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:46:51.090191   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:46:51.090220   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:46:51.182260   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:46:51.173917   12158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:51.174843   12158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:51.176369   12158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:51.176776   12158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:51.178414   12158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:46:51.173917   12158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:51.174843   12158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:51.176369   12158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:51.176776   12158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:51.178414   12158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:46:51.182283   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:46:51.182295   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:51.232720   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:46:51.232749   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:51.308144   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:46:51.308178   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:53.837977   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:46:53.848545   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:46:53.848614   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:46:53.876495   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:53.876519   51251 cri.go:89] found id: ""
	I1018 17:46:53.876528   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:46:53.876595   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:53.880322   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:46:53.880394   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:46:53.907168   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:53.907231   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:53.907249   51251 cri.go:89] found id: ""
	I1018 17:46:53.907272   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:46:53.907357   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:53.911597   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:53.914987   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:46:53.915059   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:46:53.940518   51251 cri.go:89] found id: ""
	I1018 17:46:53.940542   51251 logs.go:282] 0 containers: []
	W1018 17:46:53.940551   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:46:53.940557   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:46:53.940616   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:46:53.978433   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:53.978457   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:53.978462   51251 cri.go:89] found id: ""
	I1018 17:46:53.978469   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:46:53.978524   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:53.982381   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:53.985948   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:46:53.986022   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:46:54.015365   51251 cri.go:89] found id: ""
	I1018 17:46:54.015389   51251 logs.go:282] 0 containers: []
	W1018 17:46:54.015403   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:46:54.015410   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:46:54.015469   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:46:54.043566   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:54.043585   51251 cri.go:89] found id: ""
	I1018 17:46:54.043594   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:46:54.043652   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:54.047469   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:46:54.047537   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:46:54.074756   51251 cri.go:89] found id: ""
	I1018 17:46:54.074779   51251 logs.go:282] 0 containers: []
	W1018 17:46:54.074788   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:46:54.074797   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:46:54.074836   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:54.105299   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:46:54.105329   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:54.181466   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:46:54.181501   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:46:54.274419   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:46:54.274455   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:46:54.312879   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:46:54.312907   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:46:54.417669   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:46:54.417744   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:46:54.429755   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:46:54.429780   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:46:54.498834   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:46:54.489425   12285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:54.491045   12285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:54.492004   12285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:54.493115   12285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:54.494863   12285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:46:54.489425   12285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:54.491045   12285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:54.492004   12285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:54.493115   12285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:54.494863   12285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:46:54.498906   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:46:54.498927   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:54.527210   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:46:54.527238   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:54.569700   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:46:54.569732   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:54.644529   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:46:54.644561   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:57.172362   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:46:57.183486   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:46:57.183556   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:46:57.221818   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:57.221836   51251 cri.go:89] found id: ""
	I1018 17:46:57.221844   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:46:57.221899   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:57.225454   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:46:57.225520   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:46:57.252169   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:57.252192   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:57.252197   51251 cri.go:89] found id: ""
	I1018 17:46:57.252206   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:46:57.252263   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:57.256351   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:57.259722   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:46:57.259804   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:46:57.286504   51251 cri.go:89] found id: ""
	I1018 17:46:57.286527   51251 logs.go:282] 0 containers: []
	W1018 17:46:57.286536   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:46:57.286542   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:46:57.286603   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:46:57.314232   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:57.314254   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:57.314259   51251 cri.go:89] found id: ""
	I1018 17:46:57.314267   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:46:57.314322   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:57.317847   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:57.320999   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:46:57.321074   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:46:57.346974   51251 cri.go:89] found id: ""
	I1018 17:46:57.346999   51251 logs.go:282] 0 containers: []
	W1018 17:46:57.347008   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:46:57.347014   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:46:57.347069   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:46:57.373499   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:57.373567   51251 cri.go:89] found id: ""
	I1018 17:46:57.373587   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:46:57.373664   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:57.377584   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:46:57.377703   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:46:57.407749   51251 cri.go:89] found id: ""
	I1018 17:46:57.407773   51251 logs.go:282] 0 containers: []
	W1018 17:46:57.407782   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:46:57.407790   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:46:57.407801   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:46:57.420407   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:46:57.420432   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:57.450356   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:46:57.450384   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:57.487363   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:46:57.487394   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:57.580373   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:46:57.580410   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:46:57.617494   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:46:57.617524   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:46:57.719190   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:46:57.719227   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:46:57.790068   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:46:57.780054   12431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:57.780444   12431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:57.782856   12431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:57.783240   12431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:57.785433   12431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:46:57.780054   12431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:57.780444   12431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:57.782856   12431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:57.783240   12431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:57.785433   12431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:46:57.790090   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:46:57.790104   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:57.849803   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:46:57.849835   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:57.881569   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:46:57.881600   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:57.911940   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:46:57.911966   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:47:00.495334   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:47:00.507616   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:47:00.507694   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:47:00.539238   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:00.539258   51251 cri.go:89] found id: ""
	I1018 17:47:00.539266   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:47:00.539323   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:00.543503   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:47:00.543571   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:47:00.574079   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:00.574112   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:00.574118   51251 cri.go:89] found id: ""
	I1018 17:47:00.574126   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:47:00.574199   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:00.578461   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:00.582394   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:47:00.582473   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:47:00.609898   51251 cri.go:89] found id: ""
	I1018 17:47:00.609973   51251 logs.go:282] 0 containers: []
	W1018 17:47:00.610004   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:47:00.610017   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:47:00.610086   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:47:00.637367   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:00.637388   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:00.637393   51251 cri.go:89] found id: ""
	I1018 17:47:00.637400   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:47:00.637464   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:00.641319   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:00.644789   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:47:00.644895   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:47:00.672435   51251 cri.go:89] found id: ""
	I1018 17:47:00.672467   51251 logs.go:282] 0 containers: []
	W1018 17:47:00.672476   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:47:00.672498   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:47:00.672580   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:47:00.699455   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:00.699483   51251 cri.go:89] found id: ""
	I1018 17:47:00.699492   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:47:00.699583   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:00.703264   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:47:00.703360   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:47:00.728880   51251 cri.go:89] found id: ""
	I1018 17:47:00.728902   51251 logs.go:282] 0 containers: []
	W1018 17:47:00.728909   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:47:00.728919   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:47:00.728930   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:47:00.823491   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:47:00.823527   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:47:00.902015   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:47:00.902048   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:47:00.934461   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:47:00.934491   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:47:00.946667   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:47:00.946693   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:47:01.028399   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:47:01.020279   12546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:01.020921   12546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:01.022494   12546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:01.023037   12546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:01.024610   12546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:47:01.020279   12546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:01.020921   12546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:01.022494   12546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:01.023037   12546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:01.024610   12546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:47:01.028462   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:47:01.028491   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:01.054806   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:47:01.054833   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:01.113787   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:47:01.113863   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:01.158354   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:47:01.158386   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:01.240342   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:47:01.240377   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:01.271277   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:47:01.271308   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:03.801529   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:47:03.812492   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:47:03.812565   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:47:03.840023   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:03.840046   51251 cri.go:89] found id: ""
	I1018 17:47:03.840054   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:47:03.840107   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:03.844123   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:47:03.844199   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:47:03.871286   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:03.871312   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:03.871317   51251 cri.go:89] found id: ""
	I1018 17:47:03.871325   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:47:03.871393   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:03.875415   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:03.879340   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:47:03.879454   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:47:03.907561   51251 cri.go:89] found id: ""
	I1018 17:47:03.907586   51251 logs.go:282] 0 containers: []
	W1018 17:47:03.907595   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:47:03.907602   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:47:03.907685   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:47:03.933344   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:03.933418   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:03.933445   51251 cri.go:89] found id: ""
	I1018 17:47:03.933467   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:47:03.933532   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:03.937202   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:03.940624   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:47:03.940692   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:47:03.976333   51251 cri.go:89] found id: ""
	I1018 17:47:03.976360   51251 logs.go:282] 0 containers: []
	W1018 17:47:03.976369   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:47:03.976375   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:47:03.976431   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:47:04.003969   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:04.003993   51251 cri.go:89] found id: ""
	I1018 17:47:04.004002   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:47:04.004073   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:04.008851   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:47:04.008931   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:47:04.043815   51251 cri.go:89] found id: ""
	I1018 17:47:04.043837   51251 logs.go:282] 0 containers: []
	W1018 17:47:04.043845   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:47:04.043854   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:47:04.043866   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:04.103935   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:47:04.103972   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:04.197102   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:47:04.197140   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:04.232873   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:47:04.232903   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:47:04.308823   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:47:04.308859   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:47:04.340563   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:47:04.340591   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:47:04.411725   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:47:04.402979   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:04.403733   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:04.405382   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:04.405957   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:04.407619   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:47:04.402979   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:04.403733   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:04.405382   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:04.405957   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:04.407619   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:47:04.411746   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:47:04.411758   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:04.436986   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:47:04.437017   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:04.474563   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:47:04.474599   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:04.508182   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:47:04.508207   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:47:04.612203   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:47:04.612245   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:47:07.124391   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:47:07.136931   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:47:07.137030   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:47:07.162931   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:07.162951   51251 cri.go:89] found id: ""
	I1018 17:47:07.162960   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:47:07.163014   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:07.166802   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:47:07.166873   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:47:07.194647   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:07.194666   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:07.194671   51251 cri.go:89] found id: ""
	I1018 17:47:07.194679   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:47:07.194732   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:07.198306   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:07.202321   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:47:07.202393   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:47:07.236779   51251 cri.go:89] found id: ""
	I1018 17:47:07.236804   51251 logs.go:282] 0 containers: []
	W1018 17:47:07.236813   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:47:07.236819   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:47:07.236876   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:47:07.266781   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:07.266801   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:07.266806   51251 cri.go:89] found id: ""
	I1018 17:47:07.266813   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:47:07.266867   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:07.270559   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:07.275186   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:47:07.275286   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:47:07.304386   51251 cri.go:89] found id: ""
	I1018 17:47:07.304423   51251 logs.go:282] 0 containers: []
	W1018 17:47:07.304454   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:47:07.304462   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:47:07.304540   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:47:07.333196   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:07.333220   51251 cri.go:89] found id: ""
	I1018 17:47:07.333228   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:47:07.333322   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:07.338348   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:47:07.338462   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:47:07.366271   51251 cri.go:89] found id: ""
	I1018 17:47:07.366343   51251 logs.go:282] 0 containers: []
	W1018 17:47:07.366364   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:47:07.366379   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:47:07.366391   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:47:07.468507   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:47:07.468585   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:07.529687   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:47:07.529725   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:07.565649   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:47:07.565779   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:07.596211   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:47:07.596237   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:47:07.615230   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:47:07.615299   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:47:07.692829   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:47:07.685395   12829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:07.685775   12829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:07.687235   12829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:07.687549   12829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:07.689030   12829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:47:07.685395   12829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:07.685775   12829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:07.687235   12829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:07.687549   12829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:07.689030   12829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:47:07.692899   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:47:07.692930   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:07.718952   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:47:07.719025   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:07.795561   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:47:07.795598   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:07.824250   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:47:07.824280   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:47:07.906836   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:47:07.906868   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:47:10.439981   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:47:10.451479   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:47:10.451545   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:47:10.480101   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:10.480123   51251 cri.go:89] found id: ""
	I1018 17:47:10.480132   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:47:10.480190   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:10.483904   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:47:10.484019   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:47:10.514873   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:10.514897   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:10.514902   51251 cri.go:89] found id: ""
	I1018 17:47:10.514910   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:47:10.514966   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:10.518574   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:10.522267   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:47:10.522379   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:47:10.550236   51251 cri.go:89] found id: ""
	I1018 17:47:10.550300   51251 logs.go:282] 0 containers: []
	W1018 17:47:10.550324   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:47:10.550343   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:47:10.550419   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:47:10.576542   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:10.576564   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:10.576569   51251 cri.go:89] found id: ""
	I1018 17:47:10.576576   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:47:10.576631   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:10.580343   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:10.583810   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:47:10.583876   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:47:10.608923   51251 cri.go:89] found id: ""
	I1018 17:47:10.608997   51251 logs.go:282] 0 containers: []
	W1018 17:47:10.609009   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:47:10.609016   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:47:10.609083   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:47:10.640901   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:10.640997   51251 cri.go:89] found id: ""
	I1018 17:47:10.641019   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:47:10.641104   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:10.644777   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:47:10.644898   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:47:10.686801   51251 cri.go:89] found id: ""
	I1018 17:47:10.686867   51251 logs.go:282] 0 containers: []
	W1018 17:47:10.686888   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:47:10.686902   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:47:10.686913   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:47:10.790476   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:47:10.790513   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:10.866774   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:47:10.866808   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:10.896066   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:47:10.896092   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:47:10.977137   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:47:10.977170   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:47:11.028633   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:47:11.028664   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:47:11.040841   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:47:11.040870   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:47:11.108732   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:47:11.100472   12971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:11.101171   12971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:11.102909   12971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:11.103502   12971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:11.105204   12971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:47:11.100472   12971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:11.101171   12971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:11.102909   12971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:11.103502   12971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:11.105204   12971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:47:11.108754   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:47:11.108767   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:11.142956   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:47:11.142982   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:11.203085   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:47:11.203120   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:11.245548   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:47:11.245582   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:13.780727   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:47:13.792098   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:47:13.792166   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:47:13.819543   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:13.819564   51251 cri.go:89] found id: ""
	I1018 17:47:13.819571   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:47:13.819627   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:13.823882   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:47:13.823951   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:47:13.849465   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:13.849495   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:13.849501   51251 cri.go:89] found id: ""
	I1018 17:47:13.849508   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:47:13.849563   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:13.853400   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:13.856833   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:47:13.856907   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:47:13.886459   51251 cri.go:89] found id: ""
	I1018 17:47:13.886482   51251 logs.go:282] 0 containers: []
	W1018 17:47:13.886502   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:47:13.886509   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:47:13.886576   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:47:13.914771   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:13.914840   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:13.914859   51251 cri.go:89] found id: ""
	I1018 17:47:13.914884   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:47:13.914961   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:13.919618   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:13.923284   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:47:13.923358   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:47:13.970811   51251 cri.go:89] found id: ""
	I1018 17:47:13.970833   51251 logs.go:282] 0 containers: []
	W1018 17:47:13.970841   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:47:13.970848   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:47:13.970905   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:47:13.997307   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:13.997333   51251 cri.go:89] found id: ""
	I1018 17:47:13.997341   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:47:13.997406   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:14.001258   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:47:14.001421   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:47:14.031834   51251 cri.go:89] found id: ""
	I1018 17:47:14.031908   51251 logs.go:282] 0 containers: []
	W1018 17:47:14.031930   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:47:14.031952   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:47:14.031991   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:14.115427   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:47:14.115472   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:14.155640   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:47:14.155675   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:47:14.260678   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:47:14.260712   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:14.299224   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:47:14.299256   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:14.328160   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:47:14.328189   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:47:14.402362   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:47:14.402396   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:47:14.436253   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:47:14.436279   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:47:14.448030   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:47:14.448054   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:47:14.523971   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:47:14.516092   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:14.516475   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:14.517978   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:14.518298   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:14.519757   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:47:14.516092   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:14.516475   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:14.517978   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:14.518298   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:14.519757   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:47:14.523992   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:47:14.524003   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:14.553496   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:47:14.553520   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:17.135556   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:47:17.147008   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:47:17.147074   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:47:17.173389   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:17.173409   51251 cri.go:89] found id: ""
	I1018 17:47:17.173417   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:47:17.173471   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:17.177579   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:47:17.177651   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:47:17.203627   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:17.203645   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:17.203650   51251 cri.go:89] found id: ""
	I1018 17:47:17.203657   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:47:17.203710   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:17.207344   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:17.217855   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:47:17.217930   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:47:17.249063   51251 cri.go:89] found id: ""
	I1018 17:47:17.249089   51251 logs.go:282] 0 containers: []
	W1018 17:47:17.249098   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:47:17.249105   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:47:17.249168   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:47:17.277163   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:17.277181   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:17.277186   51251 cri.go:89] found id: ""
	I1018 17:47:17.277193   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:47:17.277248   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:17.282612   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:17.286495   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:47:17.286569   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:47:17.319307   51251 cri.go:89] found id: ""
	I1018 17:47:17.319375   51251 logs.go:282] 0 containers: []
	W1018 17:47:17.319398   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:47:17.319410   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:47:17.319486   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:47:17.346484   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:17.346554   51251 cri.go:89] found id: ""
	I1018 17:47:17.346580   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:47:17.346657   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:17.350475   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:47:17.350550   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:47:17.377839   51251 cri.go:89] found id: ""
	I1018 17:47:17.377902   51251 logs.go:282] 0 containers: []
	W1018 17:47:17.377922   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:47:17.377931   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:47:17.377943   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:17.404392   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:47:17.404417   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:17.465336   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:47:17.465374   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:17.544540   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:47:17.544575   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:17.578410   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:47:17.578440   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:17.622849   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:47:17.622874   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:17.651286   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:47:17.651315   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:47:17.729896   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:47:17.729933   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:47:17.762097   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:47:17.762131   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:47:17.860291   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:47:17.860324   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:47:17.873306   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:47:17.873333   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:47:17.956831   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:47:17.948399   13279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:17.948817   13279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:17.950652   13279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:17.951205   13279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:17.953012   13279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:47:17.948399   13279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:17.948817   13279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:17.950652   13279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:17.951205   13279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:17.953012   13279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:47:20.457766   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:47:20.468306   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:47:20.468375   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:47:20.502498   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:20.502519   51251 cri.go:89] found id: ""
	I1018 17:47:20.502527   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:47:20.502581   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:20.506455   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:47:20.506526   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:47:20.533813   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:20.533831   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:20.533836   51251 cri.go:89] found id: ""
	I1018 17:47:20.533844   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:47:20.533897   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:20.537754   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:20.541481   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:47:20.541549   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:47:20.567040   51251 cri.go:89] found id: ""
	I1018 17:47:20.567063   51251 logs.go:282] 0 containers: []
	W1018 17:47:20.567071   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:47:20.567078   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:47:20.567139   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:47:20.596640   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:20.596661   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:20.596666   51251 cri.go:89] found id: ""
	I1018 17:47:20.596674   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:47:20.596729   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:20.600667   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:20.604504   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:47:20.604571   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:47:20.636801   51251 cri.go:89] found id: ""
	I1018 17:47:20.636826   51251 logs.go:282] 0 containers: []
	W1018 17:47:20.636835   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:47:20.636841   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:47:20.636919   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:47:20.663088   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:20.663107   51251 cri.go:89] found id: ""
	I1018 17:47:20.663120   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:47:20.663175   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:20.666758   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:47:20.666830   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:47:20.693183   51251 cri.go:89] found id: ""
	I1018 17:47:20.693205   51251 logs.go:282] 0 containers: []
	W1018 17:47:20.693214   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:47:20.693223   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:47:20.693233   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:47:20.759707   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:47:20.751450   13345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:20.752024   13345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:20.753590   13345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:20.754259   13345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:20.755733   13345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:47:20.751450   13345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:20.752024   13345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:20.753590   13345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:20.754259   13345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:20.755733   13345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:47:20.759728   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:47:20.759743   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:20.820356   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:47:20.820393   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:20.855109   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:47:20.855142   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:20.933430   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:47:20.933470   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:20.961931   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:47:20.961959   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:47:21.002517   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:47:21.002558   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:47:21.019433   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:47:21.019511   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:21.047420   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:47:21.047495   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:21.079819   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:47:21.079893   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:47:21.155722   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:47:21.155759   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:47:23.766139   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:47:23.777085   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:47:23.777151   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:47:23.811684   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:23.811707   51251 cri.go:89] found id: ""
	I1018 17:47:23.811715   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:47:23.811770   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:23.817453   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:47:23.817525   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:47:23.844121   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:23.844141   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:23.844146   51251 cri.go:89] found id: ""
	I1018 17:47:23.844153   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:47:23.844213   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:23.847866   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:23.851438   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:47:23.851510   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:47:23.879002   51251 cri.go:89] found id: ""
	I1018 17:47:23.879067   51251 logs.go:282] 0 containers: []
	W1018 17:47:23.879082   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:47:23.879089   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:47:23.879148   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:47:23.905700   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:23.905722   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:23.905727   51251 cri.go:89] found id: ""
	I1018 17:47:23.905735   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:47:23.905838   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:23.909628   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:23.913950   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:47:23.914019   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:47:23.946272   51251 cri.go:89] found id: ""
	I1018 17:47:23.946347   51251 logs.go:282] 0 containers: []
	W1018 17:47:23.946362   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:47:23.946370   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:47:23.946428   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:47:23.982078   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:23.982100   51251 cri.go:89] found id: ""
	I1018 17:47:23.982109   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:47:23.982162   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:23.985823   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:47:23.985895   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:47:24.020838   51251 cri.go:89] found id: ""
	I1018 17:47:24.020863   51251 logs.go:282] 0 containers: []
	W1018 17:47:24.020872   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:47:24.020881   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:47:24.020895   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:24.049680   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:47:24.049704   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:24.114947   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:47:24.114984   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:24.157780   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:47:24.157811   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:24.187365   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:47:24.187391   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:47:24.272125   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:47:24.264460   13521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:24.265126   13521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:24.266121   13521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:24.266734   13521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:24.268444   13521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:47:24.264460   13521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:24.265126   13521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:24.266121   13521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:24.266734   13521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:24.268444   13521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:47:24.272150   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:47:24.272162   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:24.351210   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:47:24.351246   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:24.379627   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:47:24.379654   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:47:24.459957   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:47:24.459991   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:47:24.490809   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:47:24.490834   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:47:24.594421   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:47:24.594457   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:47:27.106652   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:47:27.118797   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:47:27.118867   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:47:27.156694   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:27.156714   51251 cri.go:89] found id: ""
	I1018 17:47:27.156723   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:47:27.156776   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:27.160480   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:47:27.160550   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:47:27.187759   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:27.187780   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:27.187785   51251 cri.go:89] found id: ""
	I1018 17:47:27.187793   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:47:27.187855   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:27.191713   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:27.195093   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:47:27.195159   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:47:27.231641   51251 cri.go:89] found id: ""
	I1018 17:47:27.231663   51251 logs.go:282] 0 containers: []
	W1018 17:47:27.231671   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:47:27.231681   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:47:27.231737   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:47:27.259596   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:27.259614   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:27.259619   51251 cri.go:89] found id: ""
	I1018 17:47:27.259626   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:47:27.259678   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:27.263281   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:27.266728   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:47:27.266826   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:47:27.294104   51251 cri.go:89] found id: ""
	I1018 17:47:27.294127   51251 logs.go:282] 0 containers: []
	W1018 17:47:27.294139   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:47:27.294145   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:47:27.294205   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:47:27.321776   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:27.321798   51251 cri.go:89] found id: ""
	I1018 17:47:27.321806   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:47:27.321868   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:27.325558   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:47:27.325631   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:47:27.356639   51251 cri.go:89] found id: ""
	I1018 17:47:27.356666   51251 logs.go:282] 0 containers: []
	W1018 17:47:27.356674   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:47:27.356683   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:47:27.356694   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:47:27.462575   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:47:27.462610   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:47:27.529536   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:47:27.520733   13624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:27.521424   13624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:27.523093   13624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:27.523552   13624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:27.525157   13624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:47:27.520733   13624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:27.521424   13624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:27.523093   13624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:27.523552   13624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:27.525157   13624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:47:27.529559   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:47:27.529573   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:27.555154   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:47:27.555180   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:27.632084   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:47:27.632117   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:27.662590   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:47:27.662614   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:27.691692   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:47:27.691718   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:47:27.774358   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:47:27.774393   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:47:27.825515   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:47:27.825545   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:47:27.838343   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:47:27.838369   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:27.902992   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:47:27.903025   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:30.448737   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:47:30.460318   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:47:30.460398   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:47:30.488282   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:30.488306   51251 cri.go:89] found id: ""
	I1018 17:47:30.488314   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:47:30.488367   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:30.491908   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:47:30.491974   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:47:30.521041   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:30.521066   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:30.521071   51251 cri.go:89] found id: ""
	I1018 17:47:30.521079   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:47:30.521136   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:30.525103   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:30.528840   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:47:30.528916   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:47:30.562515   51251 cri.go:89] found id: ""
	I1018 17:47:30.562537   51251 logs.go:282] 0 containers: []
	W1018 17:47:30.562545   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:47:30.562551   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:47:30.562627   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:47:30.592562   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:30.592584   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:30.592589   51251 cri.go:89] found id: ""
	I1018 17:47:30.592596   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:47:30.592653   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:30.596706   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:30.600570   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:47:30.600692   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:47:30.627771   51251 cri.go:89] found id: ""
	I1018 17:47:30.627793   51251 logs.go:282] 0 containers: []
	W1018 17:47:30.627802   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:47:30.627808   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:47:30.627867   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:47:30.654477   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:30.654497   51251 cri.go:89] found id: ""
	I1018 17:47:30.654510   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:47:30.654565   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:30.658617   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:47:30.658686   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:47:30.689627   51251 cri.go:89] found id: ""
	I1018 17:47:30.689650   51251 logs.go:282] 0 containers: []
	W1018 17:47:30.689658   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:47:30.689667   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:47:30.689684   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:47:30.721050   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:47:30.721077   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:47:30.732370   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:47:30.732446   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:47:30.805446   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:47:30.796158   13776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:30.796640   13776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:30.798623   13776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:30.799026   13776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:30.800608   13776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:47:30.796158   13776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:30.796640   13776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:30.798623   13776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:30.799026   13776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:30.800608   13776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:47:30.805466   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:47:30.805478   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:30.830998   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:47:30.831024   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:30.906775   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:47:30.906811   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:30.940644   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:47:30.940671   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:47:31.026053   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:47:31.026089   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:47:31.137923   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:47:31.137966   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:31.233631   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:47:31.233668   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:31.264350   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:47:31.264374   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:33.793612   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:47:33.805648   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:47:33.805780   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:47:33.839954   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:33.840025   51251 cri.go:89] found id: ""
	I1018 17:47:33.840058   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:47:33.840138   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:33.844129   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:47:33.844243   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:47:33.871384   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:33.871408   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:33.871413   51251 cri.go:89] found id: ""
	I1018 17:47:33.871421   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:47:33.871476   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:33.875651   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:33.879420   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:47:33.879516   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:47:33.905649   51251 cri.go:89] found id: ""
	I1018 17:47:33.905676   51251 logs.go:282] 0 containers: []
	W1018 17:47:33.905684   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:47:33.905691   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:47:33.905749   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:47:33.934660   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:33.934683   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:33.934688   51251 cri.go:89] found id: ""
	I1018 17:47:33.934696   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:47:33.934780   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:33.938842   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:33.942670   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:47:33.942738   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:47:33.978544   51251 cri.go:89] found id: ""
	I1018 17:47:33.978568   51251 logs.go:282] 0 containers: []
	W1018 17:47:33.978576   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:47:33.978582   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:47:33.978643   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:47:34.012312   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:34.012389   51251 cri.go:89] found id: ""
	I1018 17:47:34.012468   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:47:34.012564   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:34.016868   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:47:34.017048   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:47:34.044577   51251 cri.go:89] found id: ""
	I1018 17:47:34.044648   51251 logs.go:282] 0 containers: []
	W1018 17:47:34.044668   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:47:34.044692   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:47:34.044729   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:34.072731   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:47:34.072799   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:47:34.103949   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:47:34.103978   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:47:34.117148   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:47:34.117176   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:47:34.197560   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:47:34.184268   13921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:34.184883   13921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:34.186363   13921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:34.186832   13921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:34.188578   13921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:47:34.184268   13921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:34.184883   13921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:34.186363   13921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:34.186832   13921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:34.188578   13921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:47:34.197584   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:47:34.197598   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:34.271679   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:47:34.271712   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:34.306656   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:47:34.306683   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:34.386272   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:47:34.386308   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:34.414077   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:47:34.414108   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:34.443807   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:47:34.443833   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:47:34.522683   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:47:34.522719   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:47:37.133400   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:47:37.147181   51251 out.go:203] 
	W1018 17:47:37.150020   51251 out.go:285] X Exiting due to K8S_APISERVER_MISSING: adding node: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	X Exiting due to K8S_APISERVER_MISSING: adding node: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1018 17:47:37.150063   51251 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	* Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1018 17:47:37.150073   51251 out.go:285] * Related issues:
	* Related issues:
	W1018 17:47:37.150088   51251 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	  - https://github.com/kubernetes/minikube/issues/4536
	W1018 17:47:37.150102   51251 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	  - https://github.com/kubernetes/minikube/issues/6014
	I1018 17:47:37.152991   51251 out.go:203] 

                                                
                                                
** /stderr **
ha_test.go:471: failed to run minikube start. args "out/minikube-linux-arm64 -p ha-181800 node list --alsologtostderr -v 5" : exit status 105
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 node list --alsologtostderr -v 5
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-181800
helpers_test.go:243: (dbg) docker inspect ha-181800:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5743bf3218eb5aef405eed98d37c004771c8345cbd69290b8943dc0c7cbde7c2",
	        "Created": "2025-10-18T17:32:56.632116312Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 51376,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T17:39:46.245999615Z",
	            "FinishedAt": "2025-10-18T17:39:45.630064495Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/5743bf3218eb5aef405eed98d37c004771c8345cbd69290b8943dc0c7cbde7c2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5743bf3218eb5aef405eed98d37c004771c8345cbd69290b8943dc0c7cbde7c2/hostname",
	        "HostsPath": "/var/lib/docker/containers/5743bf3218eb5aef405eed98d37c004771c8345cbd69290b8943dc0c7cbde7c2/hosts",
	        "LogPath": "/var/lib/docker/containers/5743bf3218eb5aef405eed98d37c004771c8345cbd69290b8943dc0c7cbde7c2/5743bf3218eb5aef405eed98d37c004771c8345cbd69290b8943dc0c7cbde7c2-json.log",
	        "Name": "/ha-181800",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-181800:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-181800",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5743bf3218eb5aef405eed98d37c004771c8345cbd69290b8943dc0c7cbde7c2",
	                "LowerDir": "/var/lib/docker/overlay2/3ee9cac217f1df57c366a054bdb2c2082365ef42fdd9cc6be8e55acf85cb35b8-init/diff:/var/lib/docker/overlay2/584ab177b02ad2db5330471b7171ad39934c457d8615b9ee4939a04b59f78474/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3ee9cac217f1df57c366a054bdb2c2082365ef42fdd9cc6be8e55acf85cb35b8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3ee9cac217f1df57c366a054bdb2c2082365ef42fdd9cc6be8e55acf85cb35b8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3ee9cac217f1df57c366a054bdb2c2082365ef42fdd9cc6be8e55acf85cb35b8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-181800",
	                "Source": "/var/lib/docker/volumes/ha-181800/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-181800",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-181800",
	                "name.minikube.sigs.k8s.io": "ha-181800",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "efaac0f11b270c145ecb6a49cdddbc0cc50de47d14ed81303acfb3d93ecaef30",
	            "SandboxKey": "/var/run/docker/netns/efaac0f11b27",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32808"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32809"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32812"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32810"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32811"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-181800": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4a:ba:f8:3c:6b:00",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "903568cdf824d38f52cb9a58c116a852c83eb599cf8cc87e25ba21b593e45142",
	                    "EndpointID": "af9b438a40e91de308acdf0827c862a018060c99dd48a4f5e67a2e361be9d341",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-181800",
	                        "5743bf3218eb"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-181800 -n ha-181800
helpers_test.go:252: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p ha-181800 logs -n 25: (2.314565253s)
helpers_test.go:260: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ ha-181800 cp ha-181800-m03:/home/docker/cp-test.txt ha-181800-m02:/home/docker/cp-test_ha-181800-m03_ha-181800-m02.txt               │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ ssh     │ ha-181800 ssh -n ha-181800-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ ssh     │ ha-181800 ssh -n ha-181800-m02 sudo cat /home/docker/cp-test_ha-181800-m03_ha-181800-m02.txt                                         │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ cp      │ ha-181800 cp ha-181800-m03:/home/docker/cp-test.txt ha-181800-m04:/home/docker/cp-test_ha-181800-m03_ha-181800-m04.txt               │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ ssh     │ ha-181800 ssh -n ha-181800-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ ssh     │ ha-181800 ssh -n ha-181800-m04 sudo cat /home/docker/cp-test_ha-181800-m03_ha-181800-m04.txt                                         │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ cp      │ ha-181800 cp testdata/cp-test.txt ha-181800-m04:/home/docker/cp-test.txt                                                             │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ ssh     │ ha-181800 ssh -n ha-181800-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ cp      │ ha-181800 cp ha-181800-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1463328482/001/cp-test_ha-181800-m04.txt │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ ssh     │ ha-181800 ssh -n ha-181800-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ cp      │ ha-181800 cp ha-181800-m04:/home/docker/cp-test.txt ha-181800:/home/docker/cp-test_ha-181800-m04_ha-181800.txt                       │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ ssh     │ ha-181800 ssh -n ha-181800-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ ssh     │ ha-181800 ssh -n ha-181800 sudo cat /home/docker/cp-test_ha-181800-m04_ha-181800.txt                                                 │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ cp      │ ha-181800 cp ha-181800-m04:/home/docker/cp-test.txt ha-181800-m02:/home/docker/cp-test_ha-181800-m04_ha-181800-m02.txt               │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ ssh     │ ha-181800 ssh -n ha-181800-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ ssh     │ ha-181800 ssh -n ha-181800-m02 sudo cat /home/docker/cp-test_ha-181800-m04_ha-181800-m02.txt                                         │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ cp      │ ha-181800 cp ha-181800-m04:/home/docker/cp-test.txt ha-181800-m03:/home/docker/cp-test_ha-181800-m04_ha-181800-m03.txt               │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ ssh     │ ha-181800 ssh -n ha-181800-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ ssh     │ ha-181800 ssh -n ha-181800-m03 sudo cat /home/docker/cp-test_ha-181800-m04_ha-181800-m03.txt                                         │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ node    │ ha-181800 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ node    │ ha-181800 node start m02 --alsologtostderr -v 5                                                                                      │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:39 UTC │
	│ node    │ ha-181800 node list --alsologtostderr -v 5                                                                                           │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:39 UTC │                     │
	│ stop    │ ha-181800 stop --alsologtostderr -v 5                                                                                                │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:39 UTC │ 18 Oct 25 17:39 UTC │
	│ start   │ ha-181800 start --wait true --alsologtostderr -v 5                                                                                   │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:39 UTC │                     │
	│ node    │ ha-181800 node list --alsologtostderr -v 5                                                                                           │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:47 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 17:39:45
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 17:39:45.975281   51251 out.go:360] Setting OutFile to fd 1 ...
	I1018 17:39:45.975504   51251 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 17:39:45.975531   51251 out.go:374] Setting ErrFile to fd 2...
	I1018 17:39:45.975549   51251 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 17:39:45.975846   51251 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-2509/.minikube/bin
	I1018 17:39:45.976262   51251 out.go:368] Setting JSON to false
	I1018 17:39:45.977169   51251 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":4935,"bootTime":1760804251,"procs":151,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1018 17:39:45.977269   51251 start.go:141] virtualization:  
	I1018 17:39:45.980610   51251 out.go:179] * [ha-181800] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 17:39:45.984311   51251 out.go:179]   - MINIKUBE_LOCATION=21409
	I1018 17:39:45.984374   51251 notify.go:220] Checking for updates...
	I1018 17:39:45.990274   51251 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 17:39:45.993215   51251 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-2509/kubeconfig
	I1018 17:39:45.996106   51251 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-2509/.minikube
	I1018 17:39:45.999014   51251 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 17:39:46.004420   51251 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 17:39:46.008306   51251 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:39:46.008436   51251 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 17:39:46.042019   51251 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 17:39:46.042131   51251 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 17:39:46.099091   51251 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-18 17:39:46.089556228 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 17:39:46.099210   51251 docker.go:318] overlay module found
	I1018 17:39:46.102259   51251 out.go:179] * Using the docker driver based on existing profile
	I1018 17:39:46.105078   51251 start.go:305] selected driver: docker
	I1018 17:39:46.105099   51251 start.go:925] validating driver "docker" against &{Name:ha-181800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-181800 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:f
alse ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Dis
ableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 17:39:46.105237   51251 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 17:39:46.105338   51251 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 17:39:46.159602   51251 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-18 17:39:46.150874009 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 17:39:46.159982   51251 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 17:39:46.160020   51251 cni.go:84] Creating CNI manager for ""
	I1018 17:39:46.160080   51251 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1018 17:39:46.160126   51251 start.go:349] cluster config:
	{Name:ha-181800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-181800 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 17:39:46.165176   51251 out.go:179] * Starting "ha-181800" primary control-plane node in "ha-181800" cluster
	I1018 17:39:46.168051   51251 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 17:39:46.170939   51251 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 17:39:46.173836   51251 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 17:39:46.173896   51251 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1018 17:39:46.173911   51251 cache.go:58] Caching tarball of preloaded images
	I1018 17:39:46.173925   51251 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 17:39:46.173990   51251 preload.go:233] Found /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 17:39:46.174000   51251 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 17:39:46.174155   51251 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/config.json ...
	I1018 17:39:46.192746   51251 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 17:39:46.192769   51251 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 17:39:46.192782   51251 cache.go:232] Successfully downloaded all kic artifacts
	I1018 17:39:46.192803   51251 start.go:360] acquireMachinesLock for ha-181800: {Name:mk3f5dfba2ab7d01f94f924dfcc5edab5f076901 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 17:39:46.192864   51251 start.go:364] duration metric: took 36.243µs to acquireMachinesLock for "ha-181800"
	I1018 17:39:46.192888   51251 start.go:96] Skipping create...Using existing machine configuration
	I1018 17:39:46.192896   51251 fix.go:54] fixHost starting: 
	I1018 17:39:46.193211   51251 cli_runner.go:164] Run: docker container inspect ha-181800 --format={{.State.Status}}
	I1018 17:39:46.209470   51251 fix.go:112] recreateIfNeeded on ha-181800: state=Stopped err=<nil>
	W1018 17:39:46.209498   51251 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 17:39:46.212825   51251 out.go:252] * Restarting existing docker container for "ha-181800" ...
	I1018 17:39:46.212900   51251 cli_runner.go:164] Run: docker start ha-181800
	I1018 17:39:46.480673   51251 cli_runner.go:164] Run: docker container inspect ha-181800 --format={{.State.Status}}
	I1018 17:39:46.500591   51251 kic.go:430] container "ha-181800" state is running.
	I1018 17:39:46.501011   51251 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-181800
	I1018 17:39:46.526396   51251 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/config.json ...
	I1018 17:39:46.526638   51251 machine.go:93] provisionDockerMachine start ...
	I1018 17:39:46.526707   51251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:39:46.546472   51251 main.go:141] libmachine: Using SSH client type: native
	I1018 17:39:46.546909   51251 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32808 <nil> <nil>}
	I1018 17:39:46.546927   51251 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 17:39:46.547526   51251 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1018 17:39:49.696893   51251 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-181800
	
	I1018 17:39:49.696925   51251 ubuntu.go:182] provisioning hostname "ha-181800"
	I1018 17:39:49.697031   51251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:39:49.714524   51251 main.go:141] libmachine: Using SSH client type: native
	I1018 17:39:49.714832   51251 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32808 <nil> <nil>}
	I1018 17:39:49.714849   51251 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-181800 && echo "ha-181800" | sudo tee /etc/hostname
	I1018 17:39:49.873528   51251 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-181800
	
	I1018 17:39:49.873612   51251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:39:49.891188   51251 main.go:141] libmachine: Using SSH client type: native
	I1018 17:39:49.891504   51251 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32808 <nil> <nil>}
	I1018 17:39:49.891521   51251 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-181800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-181800/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-181800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 17:39:50.037199   51251 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 17:39:50.037228   51251 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-2509/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-2509/.minikube}
	I1018 17:39:50.037247   51251 ubuntu.go:190] setting up certificates
	I1018 17:39:50.037257   51251 provision.go:84] configureAuth start
	I1018 17:39:50.037320   51251 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-181800
	I1018 17:39:50.055129   51251 provision.go:143] copyHostCerts
	I1018 17:39:50.055181   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem
	I1018 17:39:50.055213   51251 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem, removing ...
	I1018 17:39:50.055234   51251 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem
	I1018 17:39:50.055314   51251 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem (1078 bytes)
	I1018 17:39:50.055408   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem
	I1018 17:39:50.055430   51251 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem, removing ...
	I1018 17:39:50.055438   51251 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem
	I1018 17:39:50.055466   51251 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem (1123 bytes)
	I1018 17:39:50.055525   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem
	I1018 17:39:50.055546   51251 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem, removing ...
	I1018 17:39:50.055555   51251 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem
	I1018 17:39:50.055581   51251 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem (1675 bytes)
	I1018 17:39:50.055647   51251 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem org=jenkins.ha-181800 san=[127.0.0.1 192.168.49.2 ha-181800 localhost minikube]
	I1018 17:39:50.382522   51251 provision.go:177] copyRemoteCerts
	I1018 17:39:50.382593   51251 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 17:39:50.382633   51251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:39:50.403959   51251 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800/id_rsa Username:docker}
	I1018 17:39:50.508789   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1018 17:39:50.508850   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 17:39:50.526450   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1018 17:39:50.526538   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1018 17:39:50.544187   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1018 17:39:50.544274   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 17:39:50.561987   51251 provision.go:87] duration metric: took 524.706666ms to configureAuth
	I1018 17:39:50.562063   51251 ubuntu.go:206] setting minikube options for container-runtime
	I1018 17:39:50.562317   51251 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:39:50.562424   51251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:39:50.578939   51251 main.go:141] libmachine: Using SSH client type: native
	I1018 17:39:50.579244   51251 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32808 <nil> <nil>}
	I1018 17:39:50.579264   51251 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 17:39:50.937128   51251 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 17:39:50.937197   51251 machine.go:96] duration metric: took 4.410541s to provisionDockerMachine
	I1018 17:39:50.937222   51251 start.go:293] postStartSetup for "ha-181800" (driver="docker")
	I1018 17:39:50.937247   51251 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 17:39:50.937359   51251 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 17:39:50.937444   51251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:39:50.959339   51251 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800/id_rsa Username:docker}
	I1018 17:39:51.065300   51251 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 17:39:51.068761   51251 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 17:39:51.068792   51251 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 17:39:51.068803   51251 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/addons for local assets ...
	I1018 17:39:51.068858   51251 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/files for local assets ...
	I1018 17:39:51.068963   51251 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> 43202.pem in /etc/ssl/certs
	I1018 17:39:51.068976   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> /etc/ssl/certs/43202.pem
	I1018 17:39:51.069076   51251 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 17:39:51.076928   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /etc/ssl/certs/43202.pem (1708 bytes)
	I1018 17:39:51.094473   51251 start.go:296] duration metric: took 157.222631ms for postStartSetup
	I1018 17:39:51.094579   51251 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 17:39:51.094625   51251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:39:51.113220   51251 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800/id_rsa Username:docker}
	I1018 17:39:51.213567   51251 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 17:39:51.218175   51251 fix.go:56] duration metric: took 5.025272015s for fixHost
	I1018 17:39:51.218200   51251 start.go:83] releasing machines lock for "ha-181800", held for 5.025323101s
	I1018 17:39:51.218283   51251 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-181800
	I1018 17:39:51.235815   51251 ssh_runner.go:195] Run: cat /version.json
	I1018 17:39:51.235850   51251 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 17:39:51.235866   51251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:39:51.235904   51251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:39:51.261163   51251 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800/id_rsa Username:docker}
	I1018 17:39:51.270603   51251 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800/id_rsa Username:docker}
	I1018 17:39:51.360468   51251 ssh_runner.go:195] Run: systemctl --version
	I1018 17:39:51.454722   51251 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 17:39:51.498840   51251 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 17:39:51.503695   51251 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 17:39:51.503796   51251 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 17:39:51.511526   51251 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 17:39:51.511549   51251 start.go:495] detecting cgroup driver to use...
	I1018 17:39:51.511578   51251 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 17:39:51.511630   51251 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 17:39:51.526599   51251 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 17:39:51.539484   51251 docker.go:218] disabling cri-docker service (if available) ...
	I1018 17:39:51.539576   51251 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 17:39:51.554963   51251 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 17:39:51.568183   51251 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 17:39:51.676636   51251 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 17:39:51.792230   51251 docker.go:234] disabling docker service ...
	I1018 17:39:51.792306   51251 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 17:39:51.806847   51251 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 17:39:51.819137   51251 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 17:39:51.938883   51251 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 17:39:52.058796   51251 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 17:39:52.072487   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 17:39:52.088092   51251 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 17:39:52.088205   51251 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:39:52.097568   51251 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 17:39:52.097729   51251 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:39:52.107431   51251 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:39:52.116597   51251 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:39:52.125822   51251 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 17:39:52.134598   51251 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:39:52.143667   51251 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:39:52.151898   51251 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:39:52.160172   51251 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 17:39:52.167407   51251 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 17:39:52.174657   51251 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 17:39:52.287403   51251 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 17:39:52.421729   51251 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 17:39:52.421850   51251 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 17:39:52.425707   51251 start.go:563] Will wait 60s for crictl version
	I1018 17:39:52.425813   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:39:52.429420   51251 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 17:39:52.453867   51251 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 17:39:52.453974   51251 ssh_runner.go:195] Run: crio --version
	I1018 17:39:52.486777   51251 ssh_runner.go:195] Run: crio --version
	I1018 17:39:52.520354   51251 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 17:39:52.523389   51251 cli_runner.go:164] Run: docker network inspect ha-181800 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 17:39:52.539892   51251 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1018 17:39:52.543780   51251 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 17:39:52.553416   51251 kubeadm.go:883] updating cluster {Name:ha-181800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-181800 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:
false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 17:39:52.553576   51251 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 17:39:52.553634   51251 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 17:39:52.588251   51251 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 17:39:52.588276   51251 crio.go:433] Images already preloaded, skipping extraction
	I1018 17:39:52.588335   51251 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 17:39:52.613957   51251 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 17:39:52.613979   51251 cache_images.go:85] Images are preloaded, skipping loading
	I1018 17:39:52.613989   51251 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1018 17:39:52.614102   51251 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-181800 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-181800 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 17:39:52.614189   51251 ssh_runner.go:195] Run: crio config
	I1018 17:39:52.670252   51251 cni.go:84] Creating CNI manager for ""
	I1018 17:39:52.670275   51251 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1018 17:39:52.670294   51251 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 17:39:52.670319   51251 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-181800 NodeName:ha-181800 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 17:39:52.670455   51251 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-181800"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 17:39:52.670475   51251 kube-vip.go:115] generating kube-vip config ...
	I1018 17:39:52.670529   51251 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1018 17:39:52.682279   51251 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1018 17:39:52.682377   51251 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1018 17:39:52.682436   51251 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 17:39:52.689950   51251 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 17:39:52.690041   51251 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1018 17:39:52.697809   51251 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1018 17:39:52.710709   51251 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 17:39:52.723367   51251 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1018 17:39:52.735890   51251 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1018 17:39:52.748648   51251 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1018 17:39:52.752220   51251 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 17:39:52.762098   51251 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 17:39:52.871320   51251 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 17:39:52.886583   51251 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800 for IP: 192.168.49.2
	I1018 17:39:52.886603   51251 certs.go:195] generating shared ca certs ...
	I1018 17:39:52.886618   51251 certs.go:227] acquiring lock for ca certs: {Name:mk544ed642fa2832e9f6dd22fa45f3270b7c1ad4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:39:52.886785   51251 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key
	I1018 17:39:52.886838   51251 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key
	I1018 17:39:52.886849   51251 certs.go:257] generating profile certs ...
	I1018 17:39:52.886923   51251 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/client.key
	I1018 17:39:52.886953   51251 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.key.46a58690
	I1018 17:39:52.886970   51251 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.crt.46a58690 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1018 17:39:53.268315   51251 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.crt.46a58690 ...
	I1018 17:39:53.268348   51251 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.crt.46a58690: {Name:mk0cc861493b9d286eed0bfb736b15e28a1706f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:39:53.268572   51251 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.key.46a58690 ...
	I1018 17:39:53.268589   51251 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.key.46a58690: {Name:mk424cb4f615a1903e846801cb9cb2e734afdfb7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:39:53.268677   51251 certs.go:382] copying /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.crt.46a58690 -> /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.crt
	I1018 17:39:53.268822   51251 certs.go:386] copying /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.key.46a58690 -> /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.key
	I1018 17:39:53.268969   51251 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.key
	I1018 17:39:53.268988   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1018 17:39:53.269005   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1018 17:39:53.269023   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1018 17:39:53.269043   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1018 17:39:53.269070   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1018 17:39:53.269094   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1018 17:39:53.269112   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1018 17:39:53.269123   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1018 17:39:53.269179   51251 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem (1338 bytes)
	W1018 17:39:53.269213   51251 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320_empty.pem, impossibly tiny 0 bytes
	I1018 17:39:53.269225   51251 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 17:39:53.269249   51251 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem (1078 bytes)
	I1018 17:39:53.269273   51251 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem (1123 bytes)
	I1018 17:39:53.269299   51251 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem (1675 bytes)
	I1018 17:39:53.269346   51251 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem (1708 bytes)
	I1018 17:39:53.269376   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> /usr/share/ca-certificates/43202.pem
	I1018 17:39:53.269392   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:39:53.269403   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem -> /usr/share/ca-certificates/4320.pem
	I1018 17:39:53.269946   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 17:39:53.289258   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 17:39:53.307330   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 17:39:53.325012   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 17:39:53.342168   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1018 17:39:53.359559   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 17:39:53.376235   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 17:39:53.393388   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 17:39:53.409944   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /usr/share/ca-certificates/43202.pem (1708 bytes)
	I1018 17:39:53.427591   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 17:39:53.443532   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem --> /usr/share/ca-certificates/4320.pem (1338 bytes)
	I1018 17:39:53.459786   51251 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 17:39:53.472627   51251 ssh_runner.go:195] Run: openssl version
	I1018 17:39:53.478997   51251 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43202.pem && ln -fs /usr/share/ca-certificates/43202.pem /etc/ssl/certs/43202.pem"
	I1018 17:39:53.486807   51251 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43202.pem
	I1018 17:39:53.490229   51251 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 17:19 /usr/share/ca-certificates/43202.pem
	I1018 17:39:53.490289   51251 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43202.pem
	I1018 17:39:53.534916   51251 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43202.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 17:39:53.547040   51251 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 17:39:53.561930   51251 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:39:53.567602   51251 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:39:53.567707   51251 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:39:53.617018   51251 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 17:39:53.628559   51251 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4320.pem && ln -fs /usr/share/ca-certificates/4320.pem /etc/ssl/certs/4320.pem"
	I1018 17:39:53.641445   51251 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4320.pem
	I1018 17:39:53.645568   51251 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 17:19 /usr/share/ca-certificates/4320.pem
	I1018 17:39:53.645680   51251 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4320.pem
	I1018 17:39:53.715014   51251 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4320.pem /etc/ssl/certs/51391683.0"
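The `openssl x509 -hash` and `ln -fs .../<hash>.0` pairs above install each CA PEM under /usr/share/ca-certificates and create the subject-hash symlink that OpenSSL's certificate lookup expects in /etc/ssl/certs. A small sketch of the same install step, shelling out to the same openssl invocation; the directories are illustrative stand-ins (both must already exist), and the real run performs this over SSH with sudo:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCACert(pemPath, certDir, linkDir string) error {
	dst := filepath.Join(certDir, filepath.Base(pemPath))
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return err
	}
	if err := os.WriteFile(dst, data, 0644); err != nil {
		return err
	}
	// `openssl x509 -hash -noout -in <file>` prints the subject hash used
	// as the <hash>.0 symlink name in /etc/ssl/certs.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", dst).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(linkDir, hash+".0")
	_ = os.Remove(link) // the -f in `ln -fs`
	return os.Symlink(dst, link)
}

func main() {
	if err := installCACert("minikubeCA.pem", "/tmp/ca-certificates", "/tmp/ssl-certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}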
	I1018 17:39:53.744004   51251 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 17:39:53.751940   51251 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 17:39:53.829686   51251 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 17:39:53.890601   51251 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 17:39:53.957371   51251 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 17:39:54.017003   51251 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 17:39:54.064655   51251 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
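Each `-checkend 86400` run above asks whether the certificate expires within the next 24 hours, which is how the restart path decides whether control-plane certs need renewal. The same check expressed directly with crypto/x509, as a hedged sketch with an illustrative file path:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the Go equivalent of `openssl x509 -noout -in path -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("apiserver-kubelet-client.crt", 86400*time.Second)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}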
	I1018 17:39:54.111921   51251 kubeadm.go:400] StartCluster: {Name:ha-181800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-181800 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}

	I1018 17:39:54.112099   51251 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 17:39:54.112174   51251 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 17:39:54.163162   51251 cri.go:89] found id: "dda012a63c45a5c37a124da696c59f0ac82f51c6728ee30f5a6b3a9df6f28b54"
	I1018 17:39:54.163230   51251 cri.go:89] found id: "ac8ef32697a356e273cd1b84ce23b6e628c802ef7b211f001fc50bb472635814"
	I1018 17:39:54.163250   51251 cri.go:89] found id: "4957aae3df6cdc996ba2129d1f43210ebdec1c480e6db0115ee34f32691af151"
	I1018 17:39:54.163265   51251 cri.go:89] found id: "6e9b6c2f0e69c56776af6be092e8313aef540b7319fd0664f3eb3f947353a66b"
	I1018 17:39:54.163282   51251 cri.go:89] found id: "a0776ff98d8411ec5ae52a11de472cb17e1d8c764d642bf18a22aec8b44a08ee"
	I1018 17:39:54.163300   51251 cri.go:89] found id: ""
	I1018 17:39:54.163370   51251 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 17:39:54.178952   51251 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T17:39:54Z" level=error msg="open /run/runc: no such file or directory"
	I1018 17:39:54.179088   51251 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 17:39:54.202035   51251 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 17:39:54.202104   51251 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 17:39:54.202180   51251 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 17:39:54.218306   51251 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 17:39:54.218743   51251 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-181800" does not appear in /home/jenkins/minikube-integration/21409-2509/kubeconfig
	I1018 17:39:54.218882   51251 kubeconfig.go:62] /home/jenkins/minikube-integration/21409-2509/kubeconfig needs updating (will repair): [kubeconfig missing "ha-181800" cluster setting kubeconfig missing "ha-181800" context setting]
	I1018 17:39:54.219252   51251 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/kubeconfig: {Name:mk229d7d0b6575771899e0ce8a346bbddf5cf86b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:39:54.219794   51251 kapi.go:59] client config for ha-181800: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/client.key", CAFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1018 17:39:54.220519   51251 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1018 17:39:54.220606   51251 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1018 17:39:54.220635   51251 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1018 17:39:54.220585   51251 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1018 17:39:54.220726   51251 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1018 17:39:54.220753   51251 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1018 17:39:54.221075   51251 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 17:39:54.234375   51251 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1018 17:39:54.234436   51251 kubeadm.go:601] duration metric: took 32.30335ms to restartPrimaryControlPlane
	I1018 17:39:54.234460   51251 kubeadm.go:402] duration metric: took 122.54698ms to StartCluster
	I1018 17:39:54.234487   51251 settings.go:142] acquiring lock: {Name:mk3a3fd093bc95e20cc1842611fedcbe4a79e692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:39:54.234565   51251 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-2509/kubeconfig
	I1018 17:39:54.235140   51251 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/kubeconfig: {Name:mk229d7d0b6575771899e0ce8a346bbddf5cf86b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:39:54.235365   51251 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 17:39:54.235417   51251 start.go:241] waiting for startup goroutines ...
	I1018 17:39:54.235446   51251 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 17:39:54.235957   51251 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:39:54.241374   51251 out.go:179] * Enabled addons: 
	I1018 17:39:54.244317   51251 addons.go:514] duration metric: took 8.873213ms for enable addons: enabled=[]
	I1018 17:39:54.244381   51251 start.go:246] waiting for cluster config update ...
	I1018 17:39:54.244403   51251 start.go:255] writing updated cluster config ...
	I1018 17:39:54.247646   51251 out.go:203] 
	I1018 17:39:54.250620   51251 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:39:54.250787   51251 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/config.json ...
	I1018 17:39:54.254182   51251 out.go:179] * Starting "ha-181800-m02" control-plane node in "ha-181800" cluster
	I1018 17:39:54.257073   51251 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 17:39:54.259992   51251 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 17:39:54.262894   51251 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 17:39:54.262941   51251 cache.go:58] Caching tarball of preloaded images
	I1018 17:39:54.263061   51251 preload.go:233] Found /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 17:39:54.263094   51251 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 17:39:54.263229   51251 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/config.json ...
	I1018 17:39:54.263458   51251 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 17:39:54.291252   51251 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 17:39:54.291269   51251 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 17:39:54.291282   51251 cache.go:232] Successfully downloaded all kic artifacts
	I1018 17:39:54.291303   51251 start.go:360] acquireMachinesLock for ha-181800-m02: {Name:mk36a488c0fbfc8557c6ba291b969aad85b45635 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 17:39:54.291352   51251 start.go:364] duration metric: took 33.977µs to acquireMachinesLock for "ha-181800-m02"
	I1018 17:39:54.291370   51251 start.go:96] Skipping create...Using existing machine configuration
	I1018 17:39:54.291375   51251 fix.go:54] fixHost starting: m02
	I1018 17:39:54.291629   51251 cli_runner.go:164] Run: docker container inspect ha-181800-m02 --format={{.State.Status}}
	I1018 17:39:54.318512   51251 fix.go:112] recreateIfNeeded on ha-181800-m02: state=Stopped err=<nil>
	W1018 17:39:54.318536   51251 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 17:39:54.321781   51251 out.go:252] * Restarting existing docker container for "ha-181800-m02" ...
	I1018 17:39:54.321859   51251 cli_runner.go:164] Run: docker start ha-181800-m02
	I1018 17:39:54.692758   51251 cli_runner.go:164] Run: docker container inspect ha-181800-m02 --format={{.State.Status}}
	I1018 17:39:54.723920   51251 kic.go:430] container "ha-181800-m02" state is running.
	I1018 17:39:54.724263   51251 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-181800-m02
	I1018 17:39:54.749215   51251 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/config.json ...
	I1018 17:39:54.749467   51251 machine.go:93] provisionDockerMachine start ...
	I1018 17:39:54.749523   51251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m02
	I1018 17:39:54.781536   51251 main.go:141] libmachine: Using SSH client type: native
	I1018 17:39:54.781830   51251 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I1018 17:39:54.781839   51251 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 17:39:54.782427   51251 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39794->127.0.0.1:32813: read: connection reset by peer
	I1018 17:39:58.082162   51251 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-181800-m02
	
	I1018 17:39:58.082184   51251 ubuntu.go:182] provisioning hostname "ha-181800-m02"
	I1018 17:39:58.082261   51251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m02
	I1018 17:39:58.126530   51251 main.go:141] libmachine: Using SSH client type: native
	I1018 17:39:58.126844   51251 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I1018 17:39:58.126855   51251 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-181800-m02 && echo "ha-181800-m02" | sudo tee /etc/hostname
	I1018 17:39:58.443573   51251 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-181800-m02
	
	I1018 17:39:58.443690   51251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m02
	I1018 17:39:58.478907   51251 main.go:141] libmachine: Using SSH client type: native
	I1018 17:39:58.479213   51251 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I1018 17:39:58.479243   51251 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-181800-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-181800-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-181800-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 17:39:58.737653   51251 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 17:39:58.737680   51251 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-2509/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-2509/.minikube}
	I1018 17:39:58.737725   51251 ubuntu.go:190] setting up certificates
	I1018 17:39:58.737736   51251 provision.go:84] configureAuth start
	I1018 17:39:58.737818   51251 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-181800-m02
	I1018 17:39:58.774675   51251 provision.go:143] copyHostCerts
	I1018 17:39:58.774718   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem
	I1018 17:39:58.774757   51251 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem, removing ...
	I1018 17:39:58.774769   51251 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem
	I1018 17:39:58.774848   51251 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem (1123 bytes)
	I1018 17:39:58.774946   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem
	I1018 17:39:58.774970   51251 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem, removing ...
	I1018 17:39:58.774977   51251 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem
	I1018 17:39:58.775018   51251 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem (1675 bytes)
	I1018 17:39:58.775074   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem
	I1018 17:39:58.775100   51251 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem, removing ...
	I1018 17:39:58.775109   51251 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem
	I1018 17:39:58.775135   51251 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem (1078 bytes)
	I1018 17:39:58.775197   51251 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem org=jenkins.ha-181800-m02 san=[127.0.0.1 192.168.49.3 ha-181800-m02 localhost minikube]
	I1018 17:39:59.196567   51251 provision.go:177] copyRemoteCerts
	I1018 17:39:59.197114   51251 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 17:39:59.197174   51251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m02
	I1018 17:39:59.222600   51251 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m02/id_rsa Username:docker}
	I1018 17:39:59.394297   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1018 17:39:59.394389   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 17:39:59.450203   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1018 17:39:59.450288   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1018 17:39:59.513512   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1018 17:39:59.513624   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 17:39:59.573995   51251 provision.go:87] duration metric: took 836.238905ms to configureAuth
	I1018 17:39:59.574021   51251 ubuntu.go:206] setting minikube options for container-runtime
	I1018 17:39:59.574290   51251 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:39:59.574415   51251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m02
	I1018 17:39:59.606597   51251 main.go:141] libmachine: Using SSH client type: native
	I1018 17:39:59.606908   51251 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I1018 17:39:59.606927   51251 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 17:40:00.196427   51251 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 17:40:00.196520   51251 machine.go:96] duration metric: took 5.447042221s to provisionDockerMachine
	I1018 17:40:00.196547   51251 start.go:293] postStartSetup for "ha-181800-m02" (driver="docker")
	I1018 17:40:00.196572   51251 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 17:40:00.196694   51251 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 17:40:00.196782   51251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m02
	I1018 17:40:00.238873   51251 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m02/id_rsa Username:docker}
	I1018 17:40:00.392500   51251 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 17:40:00.403930   51251 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 17:40:00.403959   51251 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 17:40:00.403971   51251 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/addons for local assets ...
	I1018 17:40:00.404043   51251 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/files for local assets ...
	I1018 17:40:00.404125   51251 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> 43202.pem in /etc/ssl/certs
	I1018 17:40:00.404133   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> /etc/ssl/certs/43202.pem
	I1018 17:40:00.404244   51251 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 17:40:00.423321   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /etc/ssl/certs/43202.pem (1708 bytes)
	I1018 17:40:00.459796   51251 start.go:296] duration metric: took 263.21852ms for postStartSetup
	I1018 17:40:00.459966   51251 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 17:40:00.460049   51251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m02
	I1018 17:40:00.503330   51251 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m02/id_rsa Username:docker}
	I1018 17:40:00.631049   51251 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 17:40:00.645680   51251 fix.go:56] duration metric: took 6.354295561s for fixHost
	I1018 17:40:00.645709   51251 start.go:83] releasing machines lock for "ha-181800-m02", held for 6.35434937s
	I1018 17:40:00.645791   51251 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-181800-m02
	I1018 17:40:00.682830   51251 out.go:179] * Found network options:
	I1018 17:40:00.685894   51251 out.go:179]   - NO_PROXY=192.168.49.2
	W1018 17:40:00.688804   51251 proxy.go:120] fail to check proxy env: Error ip not in block
	W1018 17:40:00.688858   51251 proxy.go:120] fail to check proxy env: Error ip not in block
	I1018 17:40:00.688930   51251 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 17:40:00.689085   51251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m02
	I1018 17:40:00.689351   51251 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 17:40:00.689409   51251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m02
	I1018 17:40:00.730142   51251 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m02/id_rsa Username:docker}
	I1018 17:40:00.730174   51251 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m02/id_rsa Username:docker}
	I1018 17:40:01.294197   51251 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 17:40:01.312592   51251 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 17:40:01.312744   51251 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 17:40:01.330228   51251 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 17:40:01.330302   51251 start.go:495] detecting cgroup driver to use...
	I1018 17:40:01.330348   51251 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 17:40:01.330425   51251 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 17:40:01.357073   51251 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 17:40:01.416356   51251 docker.go:218] disabling cri-docker service (if available) ...
	I1018 17:40:01.416475   51251 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 17:40:01.453551   51251 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 17:40:01.481435   51251 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 17:40:01.742441   51251 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 17:40:01.978817   51251 docker.go:234] disabling docker service ...
	I1018 17:40:01.978936   51251 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 17:40:02.001514   51251 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 17:40:02.021678   51251 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 17:40:02.249968   51251 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 17:40:02.480556   51251 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 17:40:02.498908   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 17:40:02.526424   51251 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 17:40:02.526493   51251 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:40:02.542071   51251 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 17:40:02.542141   51251 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:40:02.559770   51251 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:40:02.574006   51251 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:40:02.589455   51251 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 17:40:02.598587   51251 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:40:02.612076   51251 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:40:02.624069   51251 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:40:02.637136   51251 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 17:40:02.652415   51251 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 17:40:02.662181   51251 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 17:40:02.863894   51251 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 17:41:33.166156   51251 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.302227656s)
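The sequence of sed commands before this restart only forces specific `key = value` lines in /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, default sysctls). A hedged sketch of that rewrite in Go, with the keys and values taken from the log and an illustrative stand-in path for the drop-in file:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setConfKey forces a single `key = "value"` line in a crio drop-in,
// replacing an existing assignment or appending one if none is present.
func setConfKey(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	line := fmt.Sprintf("%s = %q", key, value)
	out := data
	if re.Match(data) {
		out = re.ReplaceAll(data, []byte(line))
	} else {
		out = append(data, []byte("\n"+line+"\n")...)
	}
	return os.WriteFile(path, out, 0644)
}

func main() {
	conf := "/tmp/02-crio.conf" // stand-in for /etc/crio/crio.conf.d/02-crio.conf
	_ = setConfKey(conf, "pause_image", "registry.k8s.io/pause:3.10.1")
	_ = setConfKey(conf, "cgroup_manager", "cgroupfs")
	// after editing, the real run reloads systemd and restarts crio, as logged above
}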
	I1018 17:41:33.166194   51251 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 17:41:33.166252   51251 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 17:41:33.170771   51251 start.go:563] Will wait 60s for crictl version
	I1018 17:41:33.170830   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:41:33.176098   51251 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 17:41:33.213255   51251 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 17:41:33.213351   51251 ssh_runner.go:195] Run: crio --version
	I1018 17:41:33.258540   51251 ssh_runner.go:195] Run: crio --version
	I1018 17:41:33.296286   51251 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 17:41:33.299353   51251 out.go:179]   - env NO_PROXY=192.168.49.2
	I1018 17:41:33.302220   51251 cli_runner.go:164] Run: docker network inspect ha-181800 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 17:41:33.319775   51251 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1018 17:41:33.324290   51251 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 17:41:33.336317   51251 mustload.go:65] Loading cluster: ha-181800
	I1018 17:41:33.336557   51251 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:41:33.336817   51251 cli_runner.go:164] Run: docker container inspect ha-181800 --format={{.State.Status}}
	I1018 17:41:33.362604   51251 host.go:66] Checking if "ha-181800" exists ...
	I1018 17:41:33.362892   51251 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800 for IP: 192.168.49.3
	I1018 17:41:33.362901   51251 certs.go:195] generating shared ca certs ...
	I1018 17:41:33.362915   51251 certs.go:227] acquiring lock for ca certs: {Name:mk544ed642fa2832e9f6dd22fa45f3270b7c1ad4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:41:33.363034   51251 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key
	I1018 17:41:33.363081   51251 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key
	I1018 17:41:33.363088   51251 certs.go:257] generating profile certs ...
	I1018 17:41:33.363157   51251 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/client.key
	I1018 17:41:33.363222   51251 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.key.887e0b27
	I1018 17:41:33.363266   51251 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.key
	I1018 17:41:33.363274   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1018 17:41:33.363286   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1018 17:41:33.363296   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1018 17:41:33.363306   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1018 17:41:33.363316   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1018 17:41:33.363328   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1018 17:41:33.363338   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1018 17:41:33.363348   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1018 17:41:33.363398   51251 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem (1338 bytes)
	W1018 17:41:33.363424   51251 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320_empty.pem, impossibly tiny 0 bytes
	I1018 17:41:33.363433   51251 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 17:41:33.363455   51251 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem (1078 bytes)
	I1018 17:41:33.363476   51251 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem (1123 bytes)
	I1018 17:41:33.363496   51251 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem (1675 bytes)
	I1018 17:41:33.363536   51251 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem (1708 bytes)
	I1018 17:41:33.363565   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:41:33.363579   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem -> /usr/share/ca-certificates/4320.pem
	I1018 17:41:33.363590   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> /usr/share/ca-certificates/43202.pem
	I1018 17:41:33.363643   51251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:41:33.388336   51251 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800/id_rsa Username:docker}
	I1018 17:41:33.489250   51251 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1018 17:41:33.493494   51251 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1018 17:41:33.511835   51251 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1018 17:41:33.515898   51251 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1018 17:41:33.524188   51251 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1018 17:41:33.527936   51251 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1018 17:41:33.536545   51251 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1018 17:41:33.540347   51251 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1018 17:41:33.549002   51251 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1018 17:41:33.552698   51251 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1018 17:41:33.561692   51251 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1018 17:41:33.565522   51251 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1018 17:41:33.574471   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 17:41:33.598033   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 17:41:33.620604   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 17:41:33.644520   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 17:41:33.671246   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1018 17:41:33.694599   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 17:41:33.716649   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 17:41:33.739805   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 17:41:33.761744   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 17:41:33.784279   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem --> /usr/share/ca-certificates/4320.pem (1338 bytes)
	I1018 17:41:33.807665   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /usr/share/ca-certificates/43202.pem (1708 bytes)
	I1018 17:41:33.831497   51251 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1018 17:41:33.845903   51251 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1018 17:41:33.860149   51251 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1018 17:41:33.874010   51251 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1018 17:41:33.893500   51251 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1018 17:41:33.908151   51251 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1018 17:41:33.922971   51251 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1018 17:41:33.937486   51251 ssh_runner.go:195] Run: openssl version
	I1018 17:41:33.944301   51251 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43202.pem && ln -fs /usr/share/ca-certificates/43202.pem /etc/ssl/certs/43202.pem"
	I1018 17:41:33.953654   51251 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43202.pem
	I1018 17:41:33.958036   51251 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 17:19 /usr/share/ca-certificates/43202.pem
	I1018 17:41:33.958171   51251 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43202.pem
	I1018 17:41:34.004993   51251 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43202.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 17:41:34.015337   51251 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 17:41:34.024718   51251 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:41:34.029508   51251 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:41:34.029667   51251 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:41:34.076487   51251 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 17:41:34.085949   51251 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4320.pem && ln -fs /usr/share/ca-certificates/4320.pem /etc/ssl/certs/4320.pem"
	I1018 17:41:34.095637   51251 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4320.pem
	I1018 17:41:34.100153   51251 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 17:19 /usr/share/ca-certificates/4320.pem
	I1018 17:41:34.100269   51251 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4320.pem
	I1018 17:41:34.148268   51251 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4320.pem /etc/ssl/certs/51391683.0"
	I1018 17:41:34.158037   51251 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 17:41:34.162480   51251 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 17:41:34.206936   51251 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 17:41:34.251076   51251 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 17:41:34.294598   51251 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 17:41:34.337252   51251 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 17:41:34.379050   51251 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1018 17:41:34.422861   51251 kubeadm.go:934] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1018 17:41:34.423031   51251 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-181800-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-181800 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 17:41:34.423078   51251 kube-vip.go:115] generating kube-vip config ...
	I1018 17:41:34.423166   51251 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1018 17:41:34.435895   51251 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1018 17:41:34.435996   51251 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
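
The lsmod probe above exited with status 1, so no ip_vs modules were visible and kube-vip's control-plane load-balancing was skipped; the manifest printed above instead relies on ARP announcements and lease-based leader election (vip_arp, vip_leaderelection). A hedged Go sketch of that probe; loading the module via modprobe would be a separate host-level step and is not something this run attempted:

// Sketch only: reproduces the sudo sh -c "lsmod | grep ip_vs" check above.
package main

import (
	"fmt"
	"os/exec"
)

// ipvsLoaded reports whether any ip_vs module shows up in lsmod;
// grep exits 1 when nothing matches, which is the status 1 seen in the log.
func ipvsLoaded() bool {
	out, err := exec.Command("sh", "-c", "lsmod | grep ip_vs").Output()
	return err == nil && len(out) > 0
}

func main() {
	if !ipvsLoaded() {
		fmt.Println("ip_vs not loaded; kube-vip falls back to ARP-based leader election")
	}
}
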
	I1018 17:41:34.436081   51251 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 17:41:34.444655   51251 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 17:41:34.444772   51251 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1018 17:41:34.452743   51251 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1018 17:41:34.466348   51251 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 17:41:34.479899   51251 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1018 17:41:34.497063   51251 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1018 17:41:34.500892   51251 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 17:41:34.516267   51251 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 17:41:34.674326   51251 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 17:41:34.690850   51251 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 17:41:34.691288   51251 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:41:34.696864   51251 out.go:179] * Verifying Kubernetes components...
	I1018 17:41:34.699590   51251 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 17:41:34.858485   51251 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 17:41:34.875760   51251 kapi.go:59] client config for ha-181800: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/client.key", CAFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1018 17:41:34.876060   51251 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1018 17:41:34.876378   51251 node_ready.go:35] waiting up to 6m0s for node "ha-181800-m02" to be "Ready" ...
	I1018 17:41:41.842514   51251 node_ready.go:49] node "ha-181800-m02" is "Ready"
	I1018 17:41:41.842547   51251 node_ready.go:38] duration metric: took 6.966151068s for node "ha-181800-m02" to be "Ready" ...
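
start.go allows up to 6m0s for the joining control-plane node to report Ready, and here the condition was met after roughly 7 seconds. A minimal client-go sketch of such a readiness poll, assuming an already-built *kubernetes.Clientset; this is illustrative, not minikube's own implementation:

// Sketch only: poll the node's NodeReady condition until it is True or the context expires.
package node

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
	for {
		if n, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{}); err == nil {
			for _, c := range n.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err() // timed out before the node reported Ready
		case <-time.After(2 * time.Second):
		}
	}
}
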
	I1018 17:41:41.842561   51251 api_server.go:52] waiting for apiserver process to appear ...
	I1018 17:41:41.842620   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:42.343686   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:42.843043   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:43.343313   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:43.843326   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:44.343648   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:44.843315   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:45.342911   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:45.842777   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:46.343420   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:46.843693   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:47.342746   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:47.843464   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:48.342878   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:48.843391   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:49.342759   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:49.843483   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:50.342789   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:50.842761   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:51.342785   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:51.843356   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:52.342785   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:52.843177   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:53.342698   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:53.842872   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:54.343544   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:54.842904   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:55.343425   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:55.843434   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:56.343297   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:56.843518   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:57.343357   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:57.842816   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:58.343642   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:58.842783   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:59.343043   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:59.843412   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:00.342951   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:00.843389   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:01.342774   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:01.842787   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:02.343236   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:02.842685   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:03.342751   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:03.843695   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:04.342729   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:04.843543   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:05.343721   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:05.843447   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:06.342743   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:06.842790   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:07.343656   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:07.843541   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:08.343267   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:08.843707   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:09.342771   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:09.843748   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:10.342856   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:10.842752   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:11.343307   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:11.842677   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:12.343443   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:12.843733   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:13.343641   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:13.842734   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:14.343649   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:14.842779   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:15.342756   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:15.842763   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:16.343741   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:16.842779   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:17.342825   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:17.843340   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:18.342759   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:18.842772   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:19.342755   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:19.842777   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:20.343137   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:20.843594   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:21.343397   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:21.843388   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:22.342798   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:22.843107   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:23.343587   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:23.842910   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:24.343458   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:24.843264   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:25.342775   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:25.842894   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:26.343732   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:26.842775   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:27.342787   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:27.842760   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:28.342772   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:28.843266   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:29.343220   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:29.843228   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:30.343087   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:30.842732   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:31.342878   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:31.843084   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:32.343181   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:32.843480   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:33.343320   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:33.842755   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:34.342929   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
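
From 17:41:42 onward the runner probes for an apiserver process with pgrep roughly twice a second; the repetition for nearly a minute suggests no matching process appeared before the diagnostics pass below. A one-function Go sketch of that probe (illustrative, run locally rather than over SSH):

// Sketch only: the probe repeated above, run locally instead of over SSH.
package probe

import "os/exec"

// apiserverRunning reports whether pgrep finds a kube-apiserver process for
// this profile; pgrep exits non-zero when no process matches the pattern.
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}
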
	I1018 17:42:34.842842   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:42:34.842930   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:42:34.869988   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:42:34.870010   51251 cri.go:89] found id: ""
	I1018 17:42:34.870018   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:42:34.870073   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:34.873710   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:42:34.873778   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:42:34.899173   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:42:34.899196   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:42:34.899202   51251 cri.go:89] found id: ""
	I1018 17:42:34.899209   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:42:34.899263   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:34.903214   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:34.906828   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:42:34.906903   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:42:34.933625   51251 cri.go:89] found id: ""
	I1018 17:42:34.933648   51251 logs.go:282] 0 containers: []
	W1018 17:42:34.933656   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:42:34.933663   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:42:34.933723   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:42:34.959655   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:42:34.959675   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:42:34.959680   51251 cri.go:89] found id: ""
	I1018 17:42:34.959688   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:42:34.959743   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:34.972509   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:34.977434   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:42:34.977506   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:42:35.014139   51251 cri.go:89] found id: ""
	I1018 17:42:35.014165   51251 logs.go:282] 0 containers: []
	W1018 17:42:35.014173   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:42:35.014180   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:42:35.014287   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:42:35.047968   51251 cri.go:89] found id: "f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:42:35.047993   51251 cri.go:89] found id: ""
	I1018 17:42:35.048002   51251 logs.go:282] 1 containers: [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c]
	I1018 17:42:35.048056   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:35.052096   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:42:35.052159   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:42:35.087604   51251 cri.go:89] found id: ""
	I1018 17:42:35.087628   51251 logs.go:282] 0 containers: []
	W1018 17:42:35.087636   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:42:35.087645   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:42:35.087658   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:42:35.135319   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:42:35.135352   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:42:35.186498   51251 logs.go:123] Gathering logs for kube-controller-manager [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c] ...
	I1018 17:42:35.186531   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:42:35.217338   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:42:35.217381   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:42:35.327154   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:42:35.327184   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:42:35.341645   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:42:35.341672   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:42:35.747254   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:42:35.739248    1479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:35.739909    1479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:35.741574    1479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:35.742106    1479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:35.743686    1479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:42:35.739248    1479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:35.739909    1479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:35.741574    1479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:35.742106    1479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:35.743686    1479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:42:35.747277   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:42:35.747291   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:42:35.784796   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:42:35.784825   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:42:35.811760   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:42:35.811786   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:42:35.886991   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:42:35.887025   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:42:35.921904   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:42:35.921933   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
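
With the apiserver apparently still absent, the runner collects diagnostics between probes: kubelet and CRI-O logs via journalctl, dmesg, kubectl describe nodes (which fails here because localhost:8443 refuses connections), and per-component container logs located through crictl. A hedged Go sketch of that last pattern, listing container ids by name and then tailing their logs; the helper name is an assumption:

// Sketch only: crictl ps -a --quiet --name=<component> to find ids,
// then crictl logs --tail 400 <id> for each one, as in the runs above.
package diag

import (
	"bytes"
	"os/exec"
	"strings"
)

func componentLogs(component string) (string, error) {
	ids, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	for _, id := range strings.Fields(string(ids)) {
		out, cmdErr := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
		buf.Write(out)
		if cmdErr != nil {
			return buf.String(), cmdErr
		}
	}
	return buf.String(), nil
}
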
	I1018 17:42:38.449291   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:38.459790   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:42:38.459857   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:42:38.486350   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:42:38.486373   51251 cri.go:89] found id: ""
	I1018 17:42:38.486383   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:42:38.486444   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:38.490359   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:42:38.490430   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:42:38.518049   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:42:38.518073   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:42:38.518078   51251 cri.go:89] found id: ""
	I1018 17:42:38.518097   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:42:38.518156   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:38.522183   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:38.526138   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:42:38.526213   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:42:38.552857   51251 cri.go:89] found id: ""
	I1018 17:42:38.552881   51251 logs.go:282] 0 containers: []
	W1018 17:42:38.552890   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:42:38.552896   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:42:38.552996   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:42:38.581427   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:42:38.581447   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:42:38.581452   51251 cri.go:89] found id: ""
	I1018 17:42:38.581460   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:42:38.581516   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:38.585308   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:38.588834   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:42:38.588907   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:42:38.626035   51251 cri.go:89] found id: ""
	I1018 17:42:38.626060   51251 logs.go:282] 0 containers: []
	W1018 17:42:38.626068   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:42:38.626074   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:42:38.626180   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:42:38.654519   51251 cri.go:89] found id: "f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:42:38.654541   51251 cri.go:89] found id: ""
	I1018 17:42:38.654549   51251 logs.go:282] 1 containers: [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c]
	I1018 17:42:38.654606   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:38.659468   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:42:38.659536   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:42:38.685688   51251 cri.go:89] found id: ""
	I1018 17:42:38.685717   51251 logs.go:282] 0 containers: []
	W1018 17:42:38.685726   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:42:38.685735   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:42:38.685747   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:42:38.783795   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:42:38.783829   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:42:38.826341   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:42:38.826373   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:42:38.860295   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:42:38.860328   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:42:38.914363   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:42:38.914398   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:42:38.945563   51251 logs.go:123] Gathering logs for kube-controller-manager [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c] ...
	I1018 17:42:38.945589   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:42:38.986953   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:42:38.986976   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:42:39.069689   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:42:39.069729   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:42:39.111763   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:42:39.111827   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:42:39.125634   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:42:39.125711   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:42:39.199836   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:42:39.189569    1644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:39.190870    1644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:39.192604    1644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:39.193407    1644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:39.194944    1644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:42:39.189569    1644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:39.190870    1644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:39.192604    1644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:39.193407    1644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:39.194944    1644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:42:39.199901   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:42:39.199927   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:42:41.727280   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:41.737746   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:42:41.737830   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:42:41.764569   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:42:41.764587   51251 cri.go:89] found id: ""
	I1018 17:42:41.764595   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:42:41.764651   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:41.768619   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:42:41.768692   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:42:41.795219   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:42:41.795239   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:42:41.795244   51251 cri.go:89] found id: ""
	I1018 17:42:41.795251   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:42:41.795315   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:41.799045   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:41.802635   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:42:41.802708   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:42:41.829223   51251 cri.go:89] found id: ""
	I1018 17:42:41.829246   51251 logs.go:282] 0 containers: []
	W1018 17:42:41.829256   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:42:41.829262   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:42:41.829319   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:42:41.863591   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:42:41.863612   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:42:41.863617   51251 cri.go:89] found id: ""
	I1018 17:42:41.863625   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:42:41.863708   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:41.867633   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:41.871288   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:42:41.871365   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:42:41.907130   51251 cri.go:89] found id: ""
	I1018 17:42:41.907154   51251 logs.go:282] 0 containers: []
	W1018 17:42:41.907162   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:42:41.907179   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:42:41.907239   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:42:41.937193   51251 cri.go:89] found id: "f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:42:41.937215   51251 cri.go:89] found id: ""
	I1018 17:42:41.937223   51251 logs.go:282] 1 containers: [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c]
	I1018 17:42:41.937281   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:41.941168   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:42:41.941244   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:42:41.993845   51251 cri.go:89] found id: ""
	I1018 17:42:41.993923   51251 logs.go:282] 0 containers: []
	W1018 17:42:41.993944   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:42:41.993955   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:42:41.993967   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:42:42.041265   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:42:42.041296   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:42:42.070875   51251 logs.go:123] Gathering logs for kube-controller-manager [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c] ...
	I1018 17:42:42.070904   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:42:42.106610   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:42:42.106642   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:42:42.194367   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:42:42.194403   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:42:42.229250   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:42:42.229279   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:42:42.283222   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:42:42.283254   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:42:42.343661   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:42:42.343694   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:42:42.376582   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:42:42.376608   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:42:42.475562   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:42:42.475597   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:42:42.488812   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:42:42.488842   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:42:42.564172   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:42:42.556222    1787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:42.556691    1787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:42.558297    1787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:42.558653    1787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:42.560347    1787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:42:42.556222    1787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:42.556691    1787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:42.558297    1787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:42.558653    1787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:42.560347    1787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:42:45.065078   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:45.086837   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:42:45.086979   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:42:45.165006   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:42:45.165027   51251 cri.go:89] found id: ""
	I1018 17:42:45.165035   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:42:45.165103   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:45.172323   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:42:45.172423   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:42:45.217483   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:42:45.217515   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:42:45.217521   51251 cri.go:89] found id: ""
	I1018 17:42:45.217530   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:42:45.217596   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:45.223128   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:45.227931   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:42:45.228025   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:42:45.283738   51251 cri.go:89] found id: ""
	I1018 17:42:45.283769   51251 logs.go:282] 0 containers: []
	W1018 17:42:45.283789   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:42:45.283818   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:42:45.283897   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:42:45.321652   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:42:45.321679   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:42:45.321685   51251 cri.go:89] found id: ""
	I1018 17:42:45.321694   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:42:45.321760   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:45.332292   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:45.337760   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:42:45.338055   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:42:45.381645   51251 cri.go:89] found id: ""
	I1018 17:42:45.381666   51251 logs.go:282] 0 containers: []
	W1018 17:42:45.381675   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:42:45.381681   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:42:45.381740   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:42:45.413702   51251 cri.go:89] found id: "f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:42:45.413726   51251 cri.go:89] found id: ""
	I1018 17:42:45.413735   51251 logs.go:282] 1 containers: [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c]
	I1018 17:42:45.413793   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:45.417551   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:42:45.417654   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:42:45.444154   51251 cri.go:89] found id: ""
	I1018 17:42:45.444178   51251 logs.go:282] 0 containers: []
	W1018 17:42:45.444186   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:42:45.444195   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:42:45.444206   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:42:45.537154   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:42:45.537189   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:42:45.618318   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:42:45.608985    1856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:45.610405    1856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:45.610978    1856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:45.612722    1856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:45.613098    1856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:42:45.608985    1856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:45.610405    1856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:45.610978    1856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:45.612722    1856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:45.613098    1856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:42:45.618339   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:42:45.618352   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:42:45.643567   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:42:45.643592   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:42:45.680148   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:42:45.680183   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:42:45.732576   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:42:45.732648   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:42:45.763213   51251 logs.go:123] Gathering logs for kube-controller-manager [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c] ...
	I1018 17:42:45.763299   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:42:45.790736   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:42:45.790804   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:42:45.802909   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:42:45.802991   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:42:45.850168   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:42:45.850251   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:42:45.926703   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:42:45.926741   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:42:48.486114   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:48.497086   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:42:48.497160   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:42:48.525605   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:42:48.525625   51251 cri.go:89] found id: ""
	I1018 17:42:48.525634   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:42:48.525690   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:48.529399   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:42:48.529536   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:42:48.556240   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:42:48.556261   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:42:48.556267   51251 cri.go:89] found id: ""
	I1018 17:42:48.556274   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:42:48.556331   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:48.560148   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:48.563747   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:42:48.563816   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:42:48.591484   51251 cri.go:89] found id: ""
	I1018 17:42:48.591509   51251 logs.go:282] 0 containers: []
	W1018 17:42:48.591518   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:42:48.591524   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:42:48.591584   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:42:48.621441   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:42:48.621461   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:42:48.621467   51251 cri.go:89] found id: ""
	I1018 17:42:48.621475   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:42:48.621531   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:48.625098   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:48.628679   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:42:48.628776   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:42:48.655455   51251 cri.go:89] found id: ""
	I1018 17:42:48.655477   51251 logs.go:282] 0 containers: []
	W1018 17:42:48.655486   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:42:48.655492   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:42:48.655574   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:42:48.686750   51251 cri.go:89] found id: "f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:42:48.686773   51251 cri.go:89] found id: ""
	I1018 17:42:48.686781   51251 logs.go:282] 1 containers: [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c]
	I1018 17:42:48.686841   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:48.690841   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:42:48.690946   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:42:48.718158   51251 cri.go:89] found id: ""
	I1018 17:42:48.718186   51251 logs.go:282] 0 containers: []
	W1018 17:42:48.718194   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:42:48.718203   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:42:48.718213   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:42:48.823716   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:42:48.823756   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:42:48.901683   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:42:48.892565    1991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:48.893314    1991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:48.895024    1991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:48.895911    1991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:48.897573    1991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:42:48.892565    1991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:48.893314    1991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:48.895024    1991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:48.895911    1991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:48.897573    1991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:42:48.901743   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:42:48.901756   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:42:48.946710   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:42:48.946741   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:42:48.989214   51251 logs.go:123] Gathering logs for kube-controller-manager [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c] ...
	I1018 17:42:48.989249   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:42:49.018928   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:42:49.018952   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:42:49.063728   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:42:49.063755   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:42:49.075796   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:42:49.075823   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:42:49.107128   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:42:49.107155   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:42:49.174004   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:42:49.174037   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:42:49.202814   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:42:49.202883   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:42:51.788673   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:51.804334   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:42:51.804402   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:42:51.832430   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:42:51.832451   51251 cri.go:89] found id: ""
	I1018 17:42:51.832459   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:42:51.832517   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:51.836251   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:42:51.836320   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:42:51.862897   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:42:51.862919   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:42:51.862924   51251 cri.go:89] found id: ""
	I1018 17:42:51.862931   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:42:51.862985   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:51.866673   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:51.870113   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:42:51.870200   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:42:51.895781   51251 cri.go:89] found id: ""
	I1018 17:42:51.895805   51251 logs.go:282] 0 containers: []
	W1018 17:42:51.895813   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:42:51.895820   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:42:51.895878   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:42:51.922494   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:42:51.922516   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:42:51.922521   51251 cri.go:89] found id: ""
	I1018 17:42:51.922528   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:42:51.922581   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:51.926209   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:51.929576   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:42:51.929673   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:42:51.956090   51251 cri.go:89] found id: ""
	I1018 17:42:51.956114   51251 logs.go:282] 0 containers: []
	W1018 17:42:51.956122   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:42:51.956129   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:42:51.956187   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:42:51.988490   51251 cri.go:89] found id: "f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:42:51.988512   51251 cri.go:89] found id: ""
	I1018 17:42:51.988520   51251 logs.go:282] 1 containers: [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c]
	I1018 17:42:51.988574   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:51.992080   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:42:51.992159   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:42:52.021598   51251 cri.go:89] found id: ""
	I1018 17:42:52.021624   51251 logs.go:282] 0 containers: []
	W1018 17:42:52.021632   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:42:52.021642   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:42:52.021655   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:42:52.117617   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:42:52.117653   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:42:52.176829   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:42:52.177096   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:42:52.221507   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:42:52.221581   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:42:52.290597   51251 logs.go:123] Gathering logs for kube-controller-manager [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c] ...
	I1018 17:42:52.290630   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:42:52.318933   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:42:52.318959   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:42:52.397646   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:42:52.397679   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:42:52.429557   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:42:52.429592   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:42:52.441410   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:42:52.441440   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:42:52.515237   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:42:52.505394    2179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:52.506908    2179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:52.507495    2179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:52.509107    2179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:52.509748    2179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:42:52.505394    2179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:52.506908    2179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:52.507495    2179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:52.509107    2179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:52.509748    2179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:42:52.515259   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:42:52.515272   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:42:52.546325   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:42:52.546352   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:42:55.073960   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:55.087265   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:42:55.087396   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:42:55.118731   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:42:55.118751   51251 cri.go:89] found id: ""
	I1018 17:42:55.118760   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:42:55.118827   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:55.122773   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:42:55.122841   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:42:55.160245   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:42:55.160267   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:42:55.160284   51251 cri.go:89] found id: ""
	I1018 17:42:55.160293   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:42:55.160353   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:55.164073   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:55.167693   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:42:55.167805   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:42:55.194629   51251 cri.go:89] found id: ""
	I1018 17:42:55.194653   51251 logs.go:282] 0 containers: []
	W1018 17:42:55.194661   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:42:55.194668   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:42:55.194741   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:42:55.222517   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:42:55.222579   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:42:55.222590   51251 cri.go:89] found id: ""
	I1018 17:42:55.222599   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:42:55.222655   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:55.226357   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:55.230025   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:42:55.230092   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:42:55.263792   51251 cri.go:89] found id: ""
	I1018 17:42:55.263816   51251 logs.go:282] 0 containers: []
	W1018 17:42:55.263824   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:42:55.263830   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:42:55.263889   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:42:55.291220   51251 cri.go:89] found id: "f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:42:55.291241   51251 cri.go:89] found id: ""
	I1018 17:42:55.291249   51251 logs.go:282] 1 containers: [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c]
	I1018 17:42:55.291325   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:55.294934   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:42:55.295010   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:42:55.326586   51251 cri.go:89] found id: ""
	I1018 17:42:55.326609   51251 logs.go:282] 0 containers: []
	W1018 17:42:55.326617   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:42:55.326654   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:42:55.326671   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:42:55.401452   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:42:55.392275    2267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:55.393074    2267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:55.393930    2267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:55.395756    2267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:55.396145    2267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:42:55.392275    2267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:55.393074    2267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:55.393930    2267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:55.395756    2267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:55.396145    2267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:42:55.401476   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:42:55.401489   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:42:55.447692   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:42:55.447728   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:42:55.491129   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:42:55.491159   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:42:55.568889   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:42:55.568926   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:42:55.604397   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:42:55.604423   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:42:55.621149   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:42:55.621188   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:42:55.649355   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:42:55.649383   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:42:55.703784   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:42:55.703820   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:42:55.742564   51251 logs.go:123] Gathering logs for kube-controller-manager [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c] ...
	I1018 17:42:55.742592   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:42:55.771921   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:42:55.771952   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:42:58.379973   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:58.390987   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:42:58.391064   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:42:58.420177   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:42:58.420206   51251 cri.go:89] found id: ""
	I1018 17:42:58.420214   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:42:58.420280   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:58.423975   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:42:58.424051   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:42:58.450210   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:42:58.450232   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:42:58.450237   51251 cri.go:89] found id: ""
	I1018 17:42:58.450244   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:42:58.450302   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:58.454890   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:58.458701   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:42:58.458770   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:42:58.483310   51251 cri.go:89] found id: ""
	I1018 17:42:58.483334   51251 logs.go:282] 0 containers: []
	W1018 17:42:58.483342   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:42:58.483348   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:42:58.483405   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:42:58.511930   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:42:58.511958   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:42:58.511963   51251 cri.go:89] found id: ""
	I1018 17:42:58.511970   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:42:58.512025   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:58.515745   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:58.519340   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:42:58.519409   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:42:58.546212   51251 cri.go:89] found id: ""
	I1018 17:42:58.546233   51251 logs.go:282] 0 containers: []
	W1018 17:42:58.546250   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:42:58.546257   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:42:58.546336   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:42:58.573991   51251 cri.go:89] found id: "f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:42:58.574011   51251 cri.go:89] found id: ""
	I1018 17:42:58.574019   51251 logs.go:282] 1 containers: [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c]
	I1018 17:42:58.574073   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:58.577989   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:42:58.578068   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:42:58.609463   51251 cri.go:89] found id: ""
	I1018 17:42:58.609485   51251 logs.go:282] 0 containers: []
	W1018 17:42:58.609493   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:42:58.609520   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:42:58.609542   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:42:58.623900   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:42:58.623929   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:42:58.672129   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:42:58.672159   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:42:58.702420   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:42:58.702447   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:42:58.739914   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:42:58.739941   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:42:58.840389   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:42:58.840423   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:42:58.904498   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:42:58.896431    2440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:58.896966    2440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:58.898915    2440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:58.899719    2440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:58.901011    2440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:42:58.896431    2440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:58.896966    2440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:58.898915    2440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:58.899719    2440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:58.901011    2440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:42:58.904519   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:42:58.904534   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:42:58.933888   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:42:58.933915   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:42:58.967554   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:42:58.967628   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:42:59.028427   51251 logs.go:123] Gathering logs for kube-controller-manager [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c] ...
	I1018 17:42:59.028504   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:42:59.054221   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:42:59.054249   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:43:01.639025   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:43:01.651715   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:43:01.651793   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:43:01.685240   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:01.685309   51251 cri.go:89] found id: ""
	I1018 17:43:01.685339   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:43:01.685423   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:01.690385   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:43:01.690468   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:43:01.719962   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:01.720035   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:01.720055   51251 cri.go:89] found id: ""
	I1018 17:43:01.720076   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:43:01.720148   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:01.723990   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:01.727538   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:43:01.727607   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:43:01.756529   51251 cri.go:89] found id: ""
	I1018 17:43:01.756562   51251 logs.go:282] 0 containers: []
	W1018 17:43:01.756571   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:43:01.756595   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:43:01.756676   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:43:01.789556   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:01.789581   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:01.789586   51251 cri.go:89] found id: ""
	I1018 17:43:01.789594   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:43:01.789659   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:01.794374   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:01.798060   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:43:01.798129   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:43:01.833059   51251 cri.go:89] found id: ""
	I1018 17:43:01.833089   51251 logs.go:282] 0 containers: []
	W1018 17:43:01.833097   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:43:01.833103   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:43:01.833172   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:43:01.860988   51251 cri.go:89] found id: "f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:43:01.861009   51251 cri.go:89] found id: ""
	I1018 17:43:01.861017   51251 logs.go:282] 1 containers: [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c]
	I1018 17:43:01.861076   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:01.865838   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:43:01.865913   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:43:01.893009   51251 cri.go:89] found id: ""
	I1018 17:43:01.893035   51251 logs.go:282] 0 containers: []
	W1018 17:43:01.893043   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:43:01.893052   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:43:01.893064   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:43:01.997703   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:43:01.997739   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:02.060549   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:43:02.060581   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:02.094970   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:43:02.095001   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:02.161721   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:43:02.161757   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:02.209000   51251 logs.go:123] Gathering logs for kube-controller-manager [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c] ...
	I1018 17:43:02.209029   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:43:02.239896   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:43:02.239920   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:43:02.275701   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:43:02.275727   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:43:02.288373   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:43:02.288400   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:43:02.360448   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:43:02.351719    2599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:02.352549    2599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:02.354058    2599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:02.354626    2599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:02.356320    2599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:43:02.351719    2599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:02.352549    2599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:02.354058    2599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:02.354626    2599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:02.356320    2599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:43:02.360469   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:43:02.360481   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:02.390739   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:43:02.390769   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:43:04.978257   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:43:04.988916   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:43:04.989037   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:43:05.019550   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:05.019573   51251 cri.go:89] found id: ""
	I1018 17:43:05.019582   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:43:05.019646   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:05.023992   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:43:05.024069   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:43:05.050514   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:05.050533   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:05.050538   51251 cri.go:89] found id: ""
	I1018 17:43:05.050546   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:43:05.050601   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:05.054386   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:05.058083   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:43:05.058155   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:43:05.093052   51251 cri.go:89] found id: ""
	I1018 17:43:05.093079   51251 logs.go:282] 0 containers: []
	W1018 17:43:05.093088   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:43:05.093096   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:43:05.093200   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:43:05.124045   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:05.124115   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:05.124134   51251 cri.go:89] found id: ""
	I1018 17:43:05.124156   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:43:05.124238   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:05.129085   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:05.134571   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:43:05.134649   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:43:05.162401   51251 cri.go:89] found id: ""
	I1018 17:43:05.162423   51251 logs.go:282] 0 containers: []
	W1018 17:43:05.162432   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:43:05.162439   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:43:05.162505   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:43:05.191429   51251 cri.go:89] found id: "f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:43:05.191451   51251 cri.go:89] found id: ""
	I1018 17:43:05.191459   51251 logs.go:282] 1 containers: [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c]
	I1018 17:43:05.191513   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:05.195222   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:43:05.195291   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:43:05.233765   51251 cri.go:89] found id: ""
	I1018 17:43:05.233789   51251 logs.go:282] 0 containers: []
	W1018 17:43:05.233797   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:43:05.233813   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:43:05.233824   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:43:05.314015   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:43:05.314049   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:43:05.343775   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:43:05.343799   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:43:05.447678   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:43:05.447715   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:43:05.461224   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:43:05.461251   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:43:05.531644   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:43:05.521503    2698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:05.523802    2698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:05.525607    2698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:05.526297    2698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:05.527849    2698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:43:05.521503    2698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:05.523802    2698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:05.525607    2698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:05.526297    2698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:05.527849    2698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:43:05.531668   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:43:05.531681   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:05.589572   51251 logs.go:123] Gathering logs for kube-controller-manager [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c] ...
	I1018 17:43:05.589609   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:43:05.620844   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:43:05.620871   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:05.649833   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:43:05.649861   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:05.702301   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:43:05.702335   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:05.746579   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:43:05.746612   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:08.279428   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:43:08.290505   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:43:08.290572   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:43:08.323196   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:08.323217   51251 cri.go:89] found id: ""
	I1018 17:43:08.323225   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:43:08.323287   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:08.326970   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:43:08.327042   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:43:08.353811   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:08.353833   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:08.353837   51251 cri.go:89] found id: ""
	I1018 17:43:08.353845   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:43:08.353903   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:08.357796   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:08.361798   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:43:08.361874   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:43:08.390063   51251 cri.go:89] found id: ""
	I1018 17:43:08.390086   51251 logs.go:282] 0 containers: []
	W1018 17:43:08.390094   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:43:08.390104   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:43:08.390164   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:43:08.417117   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:08.417137   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:08.417142   51251 cri.go:89] found id: ""
	I1018 17:43:08.417153   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:43:08.417209   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:08.421291   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:08.424803   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:43:08.424875   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:43:08.450383   51251 cri.go:89] found id: ""
	I1018 17:43:08.450405   51251 logs.go:282] 0 containers: []
	W1018 17:43:08.450412   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:43:08.450419   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:43:08.450517   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:43:08.475291   51251 cri.go:89] found id: "f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:43:08.475312   51251 cri.go:89] found id: ""
	I1018 17:43:08.475321   51251 logs.go:282] 1 containers: [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c]
	I1018 17:43:08.475376   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:08.479043   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:43:08.479113   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:43:08.509786   51251 cri.go:89] found id: ""
	I1018 17:43:08.509809   51251 logs.go:282] 0 containers: []
	W1018 17:43:08.509817   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:43:08.509826   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:43:08.509838   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:43:08.605996   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:43:08.606031   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:43:08.622166   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:43:08.622201   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:43:08.702891   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:43:08.692116    2820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:08.693186    2820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:08.694251    2820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:08.694895    2820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:08.697165    2820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:43:08.692116    2820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:08.693186    2820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:08.694251    2820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:08.694895    2820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:08.697165    2820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:43:08.702955   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:43:08.702973   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:08.732447   51251 logs.go:123] Gathering logs for kube-controller-manager [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c] ...
	I1018 17:43:08.732474   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:43:08.759641   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:43:08.759667   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:43:08.790348   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:43:08.790378   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:08.821468   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:43:08.821493   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:08.873070   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:43:08.873109   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:08.906030   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:43:08.906070   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:08.964907   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:43:08.964966   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:43:11.547663   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:43:11.559867   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:43:11.559932   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:43:11.595124   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:11.595143   51251 cri.go:89] found id: ""
	I1018 17:43:11.595151   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:43:11.595209   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:11.599553   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:43:11.599619   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:43:11.639738   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:11.639820   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:11.639844   51251 cri.go:89] found id: ""
	I1018 17:43:11.639865   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:43:11.639950   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:11.646442   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:11.651648   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:43:11.651787   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:43:11.695203   51251 cri.go:89] found id: ""
	I1018 17:43:11.695286   51251 logs.go:282] 0 containers: []
	W1018 17:43:11.695316   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:43:11.695337   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:43:11.695418   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:43:11.744347   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:11.744416   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:11.744441   51251 cri.go:89] found id: ""
	I1018 17:43:11.744463   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:43:11.744558   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:11.751191   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:11.755958   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:43:11.756105   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:43:11.791266   51251 cri.go:89] found id: ""
	I1018 17:43:11.791331   51251 logs.go:282] 0 containers: []
	W1018 17:43:11.791353   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:43:11.791383   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:43:11.791474   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:43:11.834876   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:11.834963   51251 cri.go:89] found id: "f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:43:11.834989   51251 cri.go:89] found id: ""
	I1018 17:43:11.835011   51251 logs.go:282] 2 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c]
	I1018 17:43:11.835086   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:11.841198   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:11.846580   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:43:11.846715   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:43:11.897749   51251 cri.go:89] found id: ""
	I1018 17:43:11.897822   51251 logs.go:282] 0 containers: []
	W1018 17:43:11.897846   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:43:11.897881   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:43:11.897928   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:11.943452   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:43:11.943536   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:12.005227   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:43:12.005338   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:43:12.062557   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:43:12.062624   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:43:12.182021   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:43:12.182095   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:43:12.197845   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:43:12.197920   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:12.260741   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:43:12.260817   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:12.335387   51251 logs.go:123] Gathering logs for kube-controller-manager [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c] ...
	I1018 17:43:12.335466   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:43:12.369750   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:43:12.369775   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:43:12.449888   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:43:12.449923   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:43:12.545478   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:43:12.535379    3028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:12.536014    3028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:12.539746    3028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:12.540245    3028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:12.541774    3028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:43:12.535379    3028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:12.536014    3028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:12.539746    3028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:12.540245    3028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:12.541774    3028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:43:12.545496   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:43:12.545509   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:12.577372   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:43:12.577397   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:15.116790   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:43:15.132080   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:43:15.132161   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:43:15.159487   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:15.159506   51251 cri.go:89] found id: ""
	I1018 17:43:15.159515   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:43:15.159567   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:15.163178   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:43:15.163272   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:43:15.191277   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:15.191296   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:15.191300   51251 cri.go:89] found id: ""
	I1018 17:43:15.191315   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:43:15.191372   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:15.195019   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:15.198423   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:43:15.198491   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:43:15.225886   51251 cri.go:89] found id: ""
	I1018 17:43:15.225910   51251 logs.go:282] 0 containers: []
	W1018 17:43:15.225919   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:43:15.225925   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:43:15.225986   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:43:15.251392   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:15.251414   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:15.251419   51251 cri.go:89] found id: ""
	I1018 17:43:15.251426   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:43:15.251480   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:15.255201   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:15.258787   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:43:15.258880   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:43:15.285767   51251 cri.go:89] found id: ""
	I1018 17:43:15.285831   51251 logs.go:282] 0 containers: []
	W1018 17:43:15.285854   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:43:15.285878   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:43:15.285951   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:43:15.316160   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:15.316219   51251 cri.go:89] found id: "f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:43:15.316239   51251 cri.go:89] found id: ""
	I1018 17:43:15.316261   51251 logs.go:282] 2 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c]
	I1018 17:43:15.316333   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:15.320128   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:15.323596   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:43:15.323665   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:43:15.349496   51251 cri.go:89] found id: ""
	I1018 17:43:15.349522   51251 logs.go:282] 0 containers: []
	W1018 17:43:15.349531   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:43:15.349541   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:43:15.349569   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:15.420881   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:43:15.420916   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:43:15.451259   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:43:15.451285   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:43:15.548698   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:43:15.548740   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:43:15.561517   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:43:15.561546   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:15.608036   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:43:15.608071   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:15.641405   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:43:15.641431   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:15.668198   51251 logs.go:123] Gathering logs for kube-controller-manager [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c] ...
	I1018 17:43:15.668226   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:43:15.694563   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:43:15.694591   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:43:15.770902   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:43:15.770936   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:43:15.836895   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:43:15.828987    3175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:15.829667    3175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:15.831325    3175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:15.831865    3175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:15.833343    3175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:43:15.828987    3175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:15.829667    3175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:15.831325    3175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:15.831865    3175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:15.833343    3175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:43:15.836919   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:43:15.836931   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:15.865888   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:43:15.865916   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:18.408468   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:43:18.419326   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:43:18.419393   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:43:18.443753   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:18.443775   51251 cri.go:89] found id: ""
	I1018 17:43:18.443783   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:43:18.443839   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:18.447404   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:43:18.447481   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:43:18.473566   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:18.473627   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:18.473639   51251 cri.go:89] found id: ""
	I1018 17:43:18.473647   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:43:18.473702   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:18.477524   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:18.481293   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:43:18.481397   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:43:18.507887   51251 cri.go:89] found id: ""
	I1018 17:43:18.507965   51251 logs.go:282] 0 containers: []
	W1018 17:43:18.507991   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:43:18.508011   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:43:18.508082   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:43:18.534789   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:18.534809   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:18.534814   51251 cri.go:89] found id: ""
	I1018 17:43:18.534821   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:43:18.534876   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:18.538531   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:18.542059   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:43:18.542133   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:43:18.567277   51251 cri.go:89] found id: ""
	I1018 17:43:18.567299   51251 logs.go:282] 0 containers: []
	W1018 17:43:18.567307   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:43:18.567316   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:43:18.567375   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:43:18.593882   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:18.593902   51251 cri.go:89] found id: "f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:43:18.593907   51251 cri.go:89] found id: ""
	I1018 17:43:18.593914   51251 logs.go:282] 2 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c]
	I1018 17:43:18.593971   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:18.598057   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:18.601482   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:43:18.601548   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:43:18.626724   51251 cri.go:89] found id: ""
	I1018 17:43:18.626748   51251 logs.go:282] 0 containers: []
	W1018 17:43:18.626756   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:43:18.626766   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:43:18.626777   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:43:18.720186   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:43:18.720220   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:43:18.732342   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:43:18.732372   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:18.777781   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:43:18.777813   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:18.814519   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:43:18.814548   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:18.842102   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:43:18.842129   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:18.870191   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:43:18.870215   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:43:18.940137   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:43:18.931877    3300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:18.932545    3300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:18.934242    3300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:18.934870    3300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:18.936368    3300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:43:18.931877    3300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:18.932545    3300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:18.934242    3300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:18.934870    3300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:18.936368    3300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:43:18.940159   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:43:18.940171   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:18.972118   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:43:18.972143   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:19.028698   51251 logs.go:123] Gathering logs for kube-controller-manager [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c] ...
	I1018 17:43:19.028731   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:43:19.053561   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:43:19.053588   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:43:19.134177   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:43:19.134210   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:43:21.666074   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:43:21.677905   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:43:21.677982   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:43:21.710449   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:21.710470   51251 cri.go:89] found id: ""
	I1018 17:43:21.710479   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:43:21.710534   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:21.714253   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:43:21.714326   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:43:21.741478   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:21.741547   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:21.741558   51251 cri.go:89] found id: ""
	I1018 17:43:21.741566   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:43:21.741627   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:21.745535   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:21.750022   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:43:21.750140   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:43:21.780635   51251 cri.go:89] found id: ""
	I1018 17:43:21.780708   51251 logs.go:282] 0 containers: []
	W1018 17:43:21.780731   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:43:21.780778   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:43:21.780856   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:43:21.808496   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:21.808514   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:21.808518   51251 cri.go:89] found id: ""
	I1018 17:43:21.808525   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:43:21.808582   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:21.812401   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:21.815810   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:43:21.815876   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:43:21.845624   51251 cri.go:89] found id: ""
	I1018 17:43:21.845657   51251 logs.go:282] 0 containers: []
	W1018 17:43:21.845665   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:43:21.845672   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:43:21.845731   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:43:21.871314   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:21.871332   51251 cri.go:89] found id: "f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:43:21.871336   51251 cri.go:89] found id: ""
	I1018 17:43:21.871343   51251 logs.go:282] 2 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c]
	I1018 17:43:21.871399   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:21.875259   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:21.878771   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:43:21.878839   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:43:21.913289   51251 cri.go:89] found id: ""
	I1018 17:43:21.913312   51251 logs.go:282] 0 containers: []
	W1018 17:43:21.913321   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:43:21.913330   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:43:21.913341   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:21.990540   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:43:21.990577   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:22.023215   51251 logs.go:123] Gathering logs for kube-controller-manager [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c] ...
	I1018 17:43:22.023243   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:43:22.053561   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:43:22.053588   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:22.081164   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:43:22.081191   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:22.145177   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:43:22.145212   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:22.184829   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:43:22.184859   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:22.228057   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:43:22.228081   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:43:22.316019   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:43:22.316053   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:43:22.347876   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:43:22.347901   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:43:22.450507   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:43:22.450541   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:43:22.462429   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:43:22.462456   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:43:22.536495   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:43:22.527657    3486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:22.528744    3486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:22.530446    3486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:22.531068    3486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:22.532737    3486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:43:22.527657    3486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:22.528744    3486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:22.530446    3486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:22.531068    3486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:22.532737    3486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
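	(Editor's note: every `kubectl describe nodes` attempt above fails the same way because nothing is answering on localhost:8443, so the loop keeps retrying until the apiserver serves again. The Go sketch below shows the kind of reachability polling this amounts to; the /readyz path, timeout, and retry interval are assumptions for illustration and are not taken from the log.)

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForAPIServer polls an apiserver health endpoint until it answers 200 or
	// the deadline expires. TLS verification is skipped because a local apiserver
	// typically presents a cluster-internal certificate.
	func waitForAPIServer(url string, deadline time.Duration) error {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		stop := time.Now().Add(deadline)
		for time.Now().Before(stop) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // apiserver is serving again
				}
			}
			// "connection refused", as seen in the log above, lands here; back off and retry.
			time.Sleep(3 * time.Second)
		}
		return fmt.Errorf("apiserver at %s did not become ready within %s", url, deadline)
	}

	func main() {
		if err := waitForAPIServer("https://localhost:8443/readyz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}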
	I1018 17:43:25.036723   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:43:25.048068   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:43:25.048137   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:43:25.074496   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:25.074517   51251 cri.go:89] found id: ""
	I1018 17:43:25.074525   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:43:25.074581   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:25.078699   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:43:25.078775   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:43:25.106068   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:25.106088   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:25.106092   51251 cri.go:89] found id: ""
	I1018 17:43:25.106099   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:43:25.106154   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:25.109911   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:25.116299   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:43:25.116392   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:43:25.152465   51251 cri.go:89] found id: ""
	I1018 17:43:25.152545   51251 logs.go:282] 0 containers: []
	W1018 17:43:25.152568   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:43:25.152587   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:43:25.152679   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:43:25.179667   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:25.179690   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:25.179695   51251 cri.go:89] found id: ""
	I1018 17:43:25.179703   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:43:25.179762   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:25.183571   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:25.187316   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:43:25.187431   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:43:25.216762   51251 cri.go:89] found id: ""
	I1018 17:43:25.216796   51251 logs.go:282] 0 containers: []
	W1018 17:43:25.216805   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:43:25.216812   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:43:25.216871   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:43:25.244556   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:25.244578   51251 cri.go:89] found id: ""
	I1018 17:43:25.244587   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:43:25.244642   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:25.248407   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:43:25.248485   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:43:25.274854   51251 cri.go:89] found id: ""
	I1018 17:43:25.274879   51251 logs.go:282] 0 containers: []
	W1018 17:43:25.274888   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:43:25.274897   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:43:25.274908   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:25.331118   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:43:25.331153   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:43:25.411446   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:43:25.411478   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:43:25.462440   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:43:25.462467   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:25.525297   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:43:25.525373   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:25.555066   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:43:25.555092   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:25.581528   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:43:25.581558   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:43:25.682424   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:43:25.682461   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:43:25.695456   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:43:25.695486   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:43:25.766142   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:43:25.757215    3610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:25.757999    3610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:25.759442    3610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:25.759856    3610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:25.761265    3610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:43:25.757215    3610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:25.757999    3610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:25.759442    3610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:25.759856    3610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:25.761265    3610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:43:25.766162   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:43:25.766174   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:25.795404   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:43:25.795433   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:28.337726   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:43:28.348255   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:43:28.348338   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:43:28.382821   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:28.382841   51251 cri.go:89] found id: ""
	I1018 17:43:28.382849   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:43:28.382903   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:28.386571   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:43:28.386653   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:43:28.418956   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:28.418976   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:28.418981   51251 cri.go:89] found id: ""
	I1018 17:43:28.418988   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:43:28.419041   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:28.422637   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:28.426047   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:43:28.426115   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:43:28.450805   51251 cri.go:89] found id: ""
	I1018 17:43:28.450826   51251 logs.go:282] 0 containers: []
	W1018 17:43:28.450834   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:43:28.450841   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:43:28.450897   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:43:28.476049   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:28.476069   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:28.476075   51251 cri.go:89] found id: ""
	I1018 17:43:28.476083   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:43:28.476137   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:28.479674   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:28.483214   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:43:28.483280   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:43:28.509438   51251 cri.go:89] found id: ""
	I1018 17:43:28.509460   51251 logs.go:282] 0 containers: []
	W1018 17:43:28.509468   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:43:28.509475   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:43:28.509531   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:43:28.536762   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:28.536783   51251 cri.go:89] found id: ""
	I1018 17:43:28.536791   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:43:28.536846   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:28.540786   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:43:28.540849   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:43:28.566044   51251 cri.go:89] found id: ""
	I1018 17:43:28.566066   51251 logs.go:282] 0 containers: []
	W1018 17:43:28.566076   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:43:28.566085   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:43:28.566126   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:43:28.668507   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:43:28.668548   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:28.696140   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:43:28.696166   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:28.742992   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:43:28.743028   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:28.773720   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:43:28.773749   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:28.800871   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:43:28.800897   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:43:28.812516   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:43:28.812544   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:43:28.881394   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:43:28.872850    3737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:28.873551    3737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:28.875119    3737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:28.875694    3737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:28.877437    3737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:43:28.872850    3737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:28.873551    3737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:28.875119    3737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:28.875694    3737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:28.877437    3737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:43:28.881466   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:43:28.881493   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:28.920319   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:43:28.920351   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:29.001463   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:43:29.001501   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:43:29.080673   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:43:29.080705   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:43:31.615872   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:43:31.627104   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:43:31.627173   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:43:31.652790   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:31.652812   51251 cri.go:89] found id: ""
	I1018 17:43:31.652820   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:43:31.652880   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:31.656835   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:43:31.656905   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:43:31.684663   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:31.684685   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:31.684690   51251 cri.go:89] found id: ""
	I1018 17:43:31.684698   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:43:31.684752   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:31.688556   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:31.692271   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:43:31.692343   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:43:31.720037   51251 cri.go:89] found id: ""
	I1018 17:43:31.720059   51251 logs.go:282] 0 containers: []
	W1018 17:43:31.720067   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:43:31.720074   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:43:31.720130   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:43:31.745058   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:31.745078   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:31.745083   51251 cri.go:89] found id: ""
	I1018 17:43:31.745090   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:43:31.745144   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:31.748688   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:31.752002   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:43:31.752068   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:43:31.780253   51251 cri.go:89] found id: ""
	I1018 17:43:31.780275   51251 logs.go:282] 0 containers: []
	W1018 17:43:31.780283   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:43:31.780289   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:43:31.780346   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:43:31.806333   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:31.806358   51251 cri.go:89] found id: ""
	I1018 17:43:31.806365   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:43:31.806429   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:31.810331   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:43:31.810403   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:43:31.836140   51251 cri.go:89] found id: ""
	I1018 17:43:31.836205   51251 logs.go:282] 0 containers: []
	W1018 17:43:31.836227   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:43:31.836250   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:43:31.836292   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:31.874437   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:43:31.874512   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:31.901146   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:43:31.901171   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:43:31.998418   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:43:31.998452   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:43:32.014569   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:43:32.014606   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:32.063231   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:43:32.063266   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:32.130021   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:43:32.130061   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:32.160724   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:43:32.160761   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:43:32.239135   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:43:32.239173   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:43:32.285504   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:43:32.285531   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:43:32.361004   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:43:32.352916    3895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:32.353683    3895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:32.355270    3895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:32.355600    3895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:32.357143    3895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:43:32.352916    3895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:32.353683    3895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:32.355270    3895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:32.355600    3895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:32.357143    3895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:43:32.361029   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:43:32.361042   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:34.888854   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:43:34.901112   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:43:34.901187   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:43:34.929962   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:34.929982   51251 cri.go:89] found id: ""
	I1018 17:43:34.929990   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:43:34.930044   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:34.933771   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:43:34.933840   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:43:34.974958   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:34.974990   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:34.974994   51251 cri.go:89] found id: ""
	I1018 17:43:34.975002   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:43:34.975063   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:34.979007   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:34.982588   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:43:34.982669   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:43:35.025772   51251 cri.go:89] found id: ""
	I1018 17:43:35.025794   51251 logs.go:282] 0 containers: []
	W1018 17:43:35.025802   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:43:35.025808   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:43:35.025867   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:43:35.054583   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:35.054606   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:35.054611   51251 cri.go:89] found id: ""
	I1018 17:43:35.054619   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:43:35.054683   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:35.058624   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:35.062166   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:43:35.062249   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:43:35.099459   51251 cri.go:89] found id: ""
	I1018 17:43:35.099482   51251 logs.go:282] 0 containers: []
	W1018 17:43:35.099490   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:43:35.099497   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:43:35.099553   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:43:35.135905   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:35.135927   51251 cri.go:89] found id: ""
	I1018 17:43:35.135936   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:43:35.135993   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:35.139558   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:43:35.139675   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:43:35.167854   51251 cri.go:89] found id: ""
	I1018 17:43:35.167877   51251 logs.go:282] 0 containers: []
	W1018 17:43:35.167886   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:43:35.167895   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:43:35.167906   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:43:35.268911   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:43:35.268953   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:43:35.351239   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:43:35.342070    3975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:35.342707    3975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:35.344447    3975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:35.345185    3975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:35.346039    3975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:43:35.342070    3975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:35.342707    3975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:35.344447    3975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:35.345185    3975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:35.346039    3975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:43:35.351259   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:43:35.351271   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:35.414894   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:43:35.414928   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:35.449804   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:43:35.449834   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:35.506409   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:43:35.506445   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:43:35.595870   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:43:35.595911   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:43:35.608335   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:43:35.608364   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:35.639546   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:43:35.639574   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:35.667961   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:43:35.667987   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:35.698739   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:43:35.698763   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:43:38.237278   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:43:38.248092   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:43:38.248161   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:43:38.274867   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:38.274888   51251 cri.go:89] found id: ""
	I1018 17:43:38.274896   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:43:38.274965   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:38.278707   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:43:38.278774   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:43:38.304232   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:38.304252   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:38.304256   51251 cri.go:89] found id: ""
	I1018 17:43:38.304264   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:43:38.304317   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:38.309670   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:38.313425   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:43:38.313497   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:43:38.344118   51251 cri.go:89] found id: ""
	I1018 17:43:38.344140   51251 logs.go:282] 0 containers: []
	W1018 17:43:38.344149   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:43:38.344156   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:43:38.344214   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:43:38.376271   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:38.376294   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:38.376298   51251 cri.go:89] found id: ""
	I1018 17:43:38.376316   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:43:38.376373   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:38.380454   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:38.384255   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:43:38.384326   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:43:38.409931   51251 cri.go:89] found id: ""
	I1018 17:43:38.409955   51251 logs.go:282] 0 containers: []
	W1018 17:43:38.409963   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:43:38.409977   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:43:38.410038   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:43:38.436568   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:38.436591   51251 cri.go:89] found id: ""
	I1018 17:43:38.436600   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:43:38.436672   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:38.440383   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:43:38.440477   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:43:38.468084   51251 cri.go:89] found id: ""
	I1018 17:43:38.468161   51251 logs.go:282] 0 containers: []
	W1018 17:43:38.468184   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:43:38.468206   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:43:38.468228   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:43:38.565168   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:43:38.565204   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:43:38.577269   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:43:38.577297   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:43:38.646729   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:43:38.638445    4115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:38.639186    4115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:38.640793    4115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:38.641395    4115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:38.643175    4115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:43:38.638445    4115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:38.639186    4115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:38.640793    4115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:38.641395    4115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:38.643175    4115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:43:38.646754   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:43:38.646768   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:38.673481   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:43:38.673507   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:38.719835   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:43:38.719871   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:38.752322   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:43:38.752362   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:38.783579   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:43:38.783606   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:43:38.820293   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:43:38.820322   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:38.878730   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:43:38.878761   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:38.907670   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:43:38.907740   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:43:41.489854   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:43:41.500771   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:43:41.500872   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:43:41.526674   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:41.526696   51251 cri.go:89] found id: ""
	I1018 17:43:41.526706   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:43:41.526770   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:41.531078   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:43:41.531191   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:43:41.562796   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:41.562823   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:41.562829   51251 cri.go:89] found id: ""
	I1018 17:43:41.562837   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:43:41.562959   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:41.566913   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:41.570998   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:43:41.571118   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:43:41.597622   51251 cri.go:89] found id: ""
	I1018 17:43:41.597647   51251 logs.go:282] 0 containers: []
	W1018 17:43:41.597655   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:43:41.597662   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:43:41.597720   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:43:41.627549   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:41.627570   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:41.627575   51251 cri.go:89] found id: ""
	I1018 17:43:41.627583   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:43:41.627642   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:41.631299   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:41.635563   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:43:41.635662   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:43:41.662146   51251 cri.go:89] found id: ""
	I1018 17:43:41.662170   51251 logs.go:282] 0 containers: []
	W1018 17:43:41.662179   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:43:41.662185   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:43:41.662244   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:43:41.693012   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:41.693038   51251 cri.go:89] found id: ""
	I1018 17:43:41.693047   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:43:41.693132   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:41.697195   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:43:41.697265   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:43:41.729826   51251 cri.go:89] found id: ""
	I1018 17:43:41.729850   51251 logs.go:282] 0 containers: []
	W1018 17:43:41.729859   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:43:41.729869   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:43:41.729880   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:43:41.828078   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:43:41.828110   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:43:41.901435   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:43:41.892987    4248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:41.893726    4248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:41.895255    4248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:41.895832    4248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:41.897510    4248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:43:41.892987    4248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:41.893726    4248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:41.895255    4248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:41.895832    4248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:41.897510    4248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:43:41.901459   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:43:41.901472   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:41.929914   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:43:41.929989   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:41.987757   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:43:41.987802   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:42.039791   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:43:42.039830   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:42.075456   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:43:42.075487   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:43:42.149099   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:43:42.149132   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:43:42.164617   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:43:42.164650   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:42.257289   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:43:42.257327   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:42.287081   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:43:42.287112   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:43:44.874333   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:43:44.884870   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:43:44.884968   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:43:44.912153   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:44.912175   51251 cri.go:89] found id: ""
	I1018 17:43:44.912183   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:43:44.912237   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:44.915849   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:43:44.915919   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:43:44.942584   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:44.942604   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:44.942609   51251 cri.go:89] found id: ""
	I1018 17:43:44.942616   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:43:44.942668   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:44.946463   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:44.949841   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:43:44.949907   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:43:44.986621   51251 cri.go:89] found id: ""
	I1018 17:43:44.986646   51251 logs.go:282] 0 containers: []
	W1018 17:43:44.986654   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:43:44.986661   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:43:44.986718   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:43:45.029811   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:45.029830   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:45.029835   51251 cri.go:89] found id: ""
	I1018 17:43:45.029843   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:43:45.029908   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:45.035692   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:45.040000   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:43:45.040078   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:43:45.098723   51251 cri.go:89] found id: ""
	I1018 17:43:45.098751   51251 logs.go:282] 0 containers: []
	W1018 17:43:45.098760   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:43:45.098770   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:43:45.098843   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:43:45.162198   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:45.162228   51251 cri.go:89] found id: ""
	I1018 17:43:45.162238   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:43:45.162307   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:45.167619   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:43:45.167700   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:43:45.211984   51251 cri.go:89] found id: ""
	I1018 17:43:45.212008   51251 logs.go:282] 0 containers: []
	W1018 17:43:45.212018   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:43:45.212028   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:43:45.212041   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:43:45.226821   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:43:45.226851   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:43:45.337585   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:43:45.321955    4382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:45.322823    4382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:45.324086    4382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:45.327115    4382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:45.329027    4382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:43:45.321955    4382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:45.322823    4382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:45.324086    4382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:45.327115    4382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:45.329027    4382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:43:45.337625   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:43:45.337641   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:45.377460   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:43:45.377491   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:45.429187   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:43:45.429222   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:45.457994   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:43:45.458022   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:43:45.540761   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:43:45.540797   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:43:45.573633   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:43:45.573662   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:43:45.672580   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:43:45.672617   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:45.706688   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:43:45.706720   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:45.783083   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:43:45.783120   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:48.314260   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:43:48.324891   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:43:48.324985   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:43:48.357904   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:48.357927   51251 cri.go:89] found id: ""
	I1018 17:43:48.357940   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:43:48.357997   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:48.362392   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:43:48.362474   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:43:48.397905   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:48.397927   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:48.397932   51251 cri.go:89] found id: ""
	I1018 17:43:48.397940   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:43:48.397993   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:48.401719   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:48.404922   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:43:48.405019   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:43:48.431573   51251 cri.go:89] found id: ""
	I1018 17:43:48.431598   51251 logs.go:282] 0 containers: []
	W1018 17:43:48.431606   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:43:48.431613   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:43:48.431673   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:43:48.458728   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:48.458755   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:48.458760   51251 cri.go:89] found id: ""
	I1018 17:43:48.458767   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:43:48.458824   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:48.462488   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:48.465841   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:43:48.465909   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:43:48.491719   51251 cri.go:89] found id: ""
	I1018 17:43:48.491741   51251 logs.go:282] 0 containers: []
	W1018 17:43:48.491749   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:43:48.491755   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:43:48.491815   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:43:48.522124   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:48.522189   51251 cri.go:89] found id: ""
	I1018 17:43:48.522211   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:43:48.522292   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:48.526320   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:43:48.526407   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:43:48.552413   51251 cri.go:89] found id: ""
	I1018 17:43:48.552436   51251 logs.go:282] 0 containers: []
	W1018 17:43:48.552445   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:43:48.552454   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:43:48.552471   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:43:48.647083   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:43:48.647114   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:43:48.660735   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:43:48.660768   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:48.690812   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:43:48.690837   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:48.721178   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:43:48.721208   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:48.748549   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:43:48.748617   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:43:48.823598   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:43:48.823637   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:43:48.855654   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:43:48.855680   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:43:48.931642   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:43:48.922606    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:48.923296    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:48.925195    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:48.925885    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:48.928154    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:43:48.922606    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:48.923296    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:48.925195    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:48.925885    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:48.928154    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:43:48.931664   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:43:48.931678   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:48.984964   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:43:48.985003   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:49.022359   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:43:49.022391   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:51.581690   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:43:51.592535   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:43:51.592618   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:43:51.621442   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:51.621470   51251 cri.go:89] found id: ""
	I1018 17:43:51.621479   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:43:51.621535   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:51.625435   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:43:51.625513   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:43:51.653328   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:51.653354   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:51.653360   51251 cri.go:89] found id: ""
	I1018 17:43:51.653367   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:43:51.653425   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:51.657372   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:51.660911   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:43:51.661083   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:43:51.687435   51251 cri.go:89] found id: ""
	I1018 17:43:51.687456   51251 logs.go:282] 0 containers: []
	W1018 17:43:51.687465   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:43:51.687472   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:43:51.687533   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:43:51.716167   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:51.716189   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:51.716194   51251 cri.go:89] found id: ""
	I1018 17:43:51.716201   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:43:51.716256   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:51.719950   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:51.723494   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:43:51.723575   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:43:51.752147   51251 cri.go:89] found id: ""
	I1018 17:43:51.752171   51251 logs.go:282] 0 containers: []
	W1018 17:43:51.752180   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:43:51.752186   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:43:51.752245   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:43:51.779213   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:51.779236   51251 cri.go:89] found id: ""
	I1018 17:43:51.779244   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:43:51.779305   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:51.782913   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:43:51.782986   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:43:51.810202   51251 cri.go:89] found id: ""
	I1018 17:43:51.810228   51251 logs.go:282] 0 containers: []
	W1018 17:43:51.810236   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:43:51.810246   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:43:51.810258   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:43:51.824029   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:43:51.824058   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:43:51.894919   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:43:51.886698    4657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:51.887712    4657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:51.889389    4657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:51.889843    4657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:51.891356    4657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:43:51.886698    4657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:51.887712    4657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:51.889389    4657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:51.889843    4657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:51.891356    4657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:43:51.894983   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:43:51.895002   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:51.955232   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:43:51.955263   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:51.990622   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:43:51.990651   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:52.020376   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:43:52.020405   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:43:52.066713   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:43:52.066740   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:43:52.172061   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:43:52.172103   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:52.214913   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:43:52.214938   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:52.251763   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:43:52.251854   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:52.311510   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:43:52.311541   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:43:54.894390   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:43:54.907290   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:43:54.907366   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:43:54.940172   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:54.940196   51251 cri.go:89] found id: ""
	I1018 17:43:54.940204   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:43:54.940260   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:54.943992   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:43:54.944086   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:43:54.978188   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:54.978210   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:54.978214   51251 cri.go:89] found id: ""
	I1018 17:43:54.978222   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:43:54.978282   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:54.982194   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:54.986022   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:43:54.986121   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:43:55.029209   51251 cri.go:89] found id: ""
	I1018 17:43:55.029239   51251 logs.go:282] 0 containers: []
	W1018 17:43:55.029248   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:43:55.029256   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:43:55.029318   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:43:55.057246   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:55.057271   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:55.057276   51251 cri.go:89] found id: ""
	I1018 17:43:55.057283   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:43:55.057336   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:55.061051   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:55.064367   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:43:55.064436   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:43:55.095243   51251 cri.go:89] found id: ""
	I1018 17:43:55.095307   51251 logs.go:282] 0 containers: []
	W1018 17:43:55.095329   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:43:55.095341   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:43:55.095399   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:43:55.122785   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:55.122804   51251 cri.go:89] found id: ""
	I1018 17:43:55.122813   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:43:55.122876   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:55.132639   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:43:55.132738   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:43:55.162942   51251 cri.go:89] found id: ""
	I1018 17:43:55.162977   51251 logs.go:282] 0 containers: []
	W1018 17:43:55.162986   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:43:55.163011   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:43:55.163032   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:55.228280   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:43:55.228312   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:55.259473   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:43:55.259500   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:43:55.292185   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:43:55.292220   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:55.341717   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:43:55.341749   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:55.375698   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:43:55.375727   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:55.402916   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:43:55.402942   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:43:55.490846   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:43:55.490886   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:43:55.587437   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:43:55.587478   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:43:55.600254   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:43:55.600280   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:43:55.666266   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:43:55.657772    4845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:55.658733    4845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:55.660294    4845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:55.660924    4845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:55.662498    4845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:43:55.657772    4845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:55.658733    4845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:55.660294    4845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:55.660924    4845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:55.662498    4845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:43:55.666289   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:43:55.666311   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:58.191608   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:43:58.207197   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:43:58.207266   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:43:58.241572   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:58.241593   51251 cri.go:89] found id: ""
	I1018 17:43:58.241602   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:43:58.241656   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:58.245301   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:43:58.245380   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:43:58.275809   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:58.275830   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:58.275835   51251 cri.go:89] found id: ""
	I1018 17:43:58.275842   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:43:58.275898   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:58.279806   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:58.283389   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:43:58.283459   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:43:58.312440   51251 cri.go:89] found id: ""
	I1018 17:43:58.312464   51251 logs.go:282] 0 containers: []
	W1018 17:43:58.312472   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:43:58.312479   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:43:58.312535   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:43:58.341315   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:58.341341   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:58.341346   51251 cri.go:89] found id: ""
	I1018 17:43:58.341354   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:43:58.341418   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:58.345155   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:58.348837   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:43:58.348906   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:43:58.375741   51251 cri.go:89] found id: ""
	I1018 17:43:58.375811   51251 logs.go:282] 0 containers: []
	W1018 17:43:58.375843   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:43:58.375861   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:43:58.375951   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:43:58.402340   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:58.402361   51251 cri.go:89] found id: ""
	I1018 17:43:58.402369   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:43:58.402424   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:58.406046   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:43:58.406112   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:43:58.430628   51251 cri.go:89] found id: ""
	I1018 17:43:58.430701   51251 logs.go:282] 0 containers: []
	W1018 17:43:58.430717   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:43:58.430727   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:43:58.430737   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:43:58.524428   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:43:58.524462   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:58.581885   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:43:58.581916   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:58.611949   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:43:58.611979   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:43:58.693414   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:43:58.693450   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:43:58.705470   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:43:58.705496   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:43:58.771817   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:43:58.763821    4948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:58.764175    4948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:58.765665    4948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:58.766083    4948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:58.767558    4948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:43:58.763821    4948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:58.764175    4948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:58.765665    4948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:58.766083    4948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:58.767558    4948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:43:58.771836   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:43:58.771847   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:58.798225   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:43:58.798252   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:58.848969   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:43:58.849000   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:58.887826   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:43:58.887856   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:58.914297   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:43:58.914322   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:44:01.448548   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:44:01.459433   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:44:01.459507   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:44:01.490534   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:01.490566   51251 cri.go:89] found id: ""
	I1018 17:44:01.490575   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:44:01.490649   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:01.494451   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:44:01.494547   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:44:01.522081   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:01.522104   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:01.522109   51251 cri.go:89] found id: ""
	I1018 17:44:01.522117   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:44:01.522175   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:01.526069   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:01.529977   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:44:01.530054   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:44:01.557411   51251 cri.go:89] found id: ""
	I1018 17:44:01.557433   51251 logs.go:282] 0 containers: []
	W1018 17:44:01.557442   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:44:01.557448   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:44:01.557508   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:44:01.585118   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:01.585142   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:01.585147   51251 cri.go:89] found id: ""
	I1018 17:44:01.585155   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:44:01.585218   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:01.588900   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:01.592735   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:44:01.592820   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:44:01.621026   51251 cri.go:89] found id: ""
	I1018 17:44:01.621098   51251 logs.go:282] 0 containers: []
	W1018 17:44:01.621121   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:44:01.621140   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:44:01.621227   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:44:01.649479   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:01.649503   51251 cri.go:89] found id: ""
	I1018 17:44:01.649512   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:44:01.649576   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:01.653509   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:44:01.653601   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:44:01.680380   51251 cri.go:89] found id: ""
	I1018 17:44:01.680405   51251 logs.go:282] 0 containers: []
	W1018 17:44:01.680413   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:44:01.680445   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:44:01.680470   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:01.719413   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:44:01.719445   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:01.778065   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:44:01.778113   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:44:01.863062   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:44:01.863098   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:44:01.933290   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:44:01.925181    5079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:01.926041    5079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:01.926645    5079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:01.928011    5079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:01.928516    5079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:44:01.925181    5079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:01.926041    5079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:01.926645    5079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:01.928011    5079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:01.928516    5079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:44:01.933312   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:44:01.933325   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:01.994141   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:44:01.994175   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:02.027406   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:44:02.027433   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:02.058305   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:44:02.058374   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:44:02.089161   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:44:02.089238   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:44:02.197504   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:44:02.197547   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:44:02.220679   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:44:02.220704   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:04.749655   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:44:04.761329   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:44:04.761399   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:44:04.791310   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:04.791330   51251 cri.go:89] found id: ""
	I1018 17:44:04.791338   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:44:04.791391   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:04.795236   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:44:04.795315   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:44:04.826977   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:04.826999   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:04.827004   51251 cri.go:89] found id: ""
	I1018 17:44:04.827012   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:44:04.827071   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:04.831056   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:04.834547   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:44:04.834619   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:44:04.861994   51251 cri.go:89] found id: ""
	I1018 17:44:04.862019   51251 logs.go:282] 0 containers: []
	W1018 17:44:04.862028   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:44:04.862036   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:44:04.862093   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:44:04.891547   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:04.891568   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:04.891573   51251 cri.go:89] found id: ""
	I1018 17:44:04.891580   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:44:04.891664   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:04.895286   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:04.898803   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:44:04.898879   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:44:04.925892   51251 cri.go:89] found id: ""
	I1018 17:44:04.925917   51251 logs.go:282] 0 containers: []
	W1018 17:44:04.925925   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:44:04.925932   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:44:04.925992   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:44:04.950898   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:04.950920   51251 cri.go:89] found id: ""
	I1018 17:44:04.950937   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:44:04.950992   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:04.954458   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:44:04.954524   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:44:04.985795   51251 cri.go:89] found id: ""
	I1018 17:44:04.985818   51251 logs.go:282] 0 containers: []
	W1018 17:44:04.985826   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:44:04.985845   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:44:04.985857   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:05.039846   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:44:05.039880   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:05.074700   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:44:05.074733   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:05.123696   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:44:05.123722   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:05.162141   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:44:05.162168   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:05.233397   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:44:05.233431   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:05.260751   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:44:05.260780   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:44:05.342549   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:44:05.342585   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:44:05.374809   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:44:05.374833   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:44:05.480225   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:44:05.480260   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:44:05.492409   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:44:05.492433   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:44:05.563815   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:44:05.554079    5263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:05.554775    5263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:05.556564    5263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:05.557183    5263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:05.558926    5263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:44:05.554079    5263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:05.554775    5263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:05.556564    5263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:05.557183    5263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:05.558926    5263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:44:08.065115   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:44:08.076338   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:44:08.076434   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:44:08.104997   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:08.105072   51251 cri.go:89] found id: ""
	I1018 17:44:08.105096   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:44:08.105171   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:08.109342   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:44:08.109473   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:44:08.142036   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:08.142059   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:08.142063   51251 cri.go:89] found id: ""
	I1018 17:44:08.142071   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:44:08.142127   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:08.145811   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:08.149071   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:44:08.149138   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:44:08.178455   51251 cri.go:89] found id: ""
	I1018 17:44:08.178476   51251 logs.go:282] 0 containers: []
	W1018 17:44:08.178485   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:44:08.178491   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:44:08.178547   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:44:08.211837   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:08.211858   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:08.211862   51251 cri.go:89] found id: ""
	I1018 17:44:08.211871   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:44:08.211926   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:08.215306   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:08.218688   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:44:08.218753   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:44:08.245955   51251 cri.go:89] found id: ""
	I1018 17:44:08.245978   51251 logs.go:282] 0 containers: []
	W1018 17:44:08.245987   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:44:08.245994   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:44:08.246072   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:44:08.277970   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:08.277992   51251 cri.go:89] found id: ""
	I1018 17:44:08.278011   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:44:08.278083   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:08.281866   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:44:08.281956   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:44:08.314813   51251 cri.go:89] found id: ""
	I1018 17:44:08.314835   51251 logs.go:282] 0 containers: []
	W1018 17:44:08.314844   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:44:08.314853   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:44:08.314888   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:44:08.326805   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:44:08.326836   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:08.360439   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:44:08.360467   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:08.388919   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:44:08.388973   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:44:08.486321   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:44:08.486351   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:44:08.552337   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:44:08.544684    5352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:08.545314    5352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:08.546893    5352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:08.547374    5352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:08.548846    5352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:44:08.544684    5352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:08.545314    5352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:08.546893    5352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:08.547374    5352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:08.548846    5352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:44:08.552356   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:44:08.552369   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:08.577416   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:44:08.577441   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:08.629938   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:44:08.629973   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:08.689554   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:44:08.689585   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:08.719107   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:44:08.719132   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:44:08.799512   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:44:08.799588   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:44:11.341509   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:44:11.352018   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:44:11.352091   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:44:11.378915   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:11.378937   51251 cri.go:89] found id: ""
	I1018 17:44:11.378946   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:44:11.379001   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:11.382407   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:44:11.382471   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:44:11.407787   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:11.407806   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:11.407811   51251 cri.go:89] found id: ""
	I1018 17:44:11.407818   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:44:11.407902   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:11.411921   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:11.415171   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:44:11.415239   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:44:11.440964   51251 cri.go:89] found id: ""
	I1018 17:44:11.440986   51251 logs.go:282] 0 containers: []
	W1018 17:44:11.440995   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:44:11.441001   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:44:11.441056   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:44:11.470489   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:11.470512   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:11.470516   51251 cri.go:89] found id: ""
	I1018 17:44:11.470523   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:44:11.470579   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:11.474310   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:11.477884   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:44:11.477960   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:44:11.504799   51251 cri.go:89] found id: ""
	I1018 17:44:11.504862   51251 logs.go:282] 0 containers: []
	W1018 17:44:11.504885   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:44:11.504906   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:44:11.505006   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:44:11.533920   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:11.533983   51251 cri.go:89] found id: ""
	I1018 17:44:11.534003   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:44:11.534091   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:11.537702   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:44:11.537789   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:44:11.564923   51251 cri.go:89] found id: ""
	I1018 17:44:11.565058   51251 logs.go:282] 0 containers: []
	W1018 17:44:11.565068   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:44:11.565077   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:44:11.565089   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:44:11.576916   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:44:11.577027   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:44:11.644089   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:44:11.636599    5471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:11.637224    5471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:11.638751    5471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:11.639193    5471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:11.640642    5471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:44:11.636599    5471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:11.637224    5471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:11.638751    5471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:11.639193    5471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:11.640642    5471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:44:11.644109   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:44:11.644123   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:11.698636   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:44:11.698669   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:11.760923   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:44:11.760958   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:11.787821   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:44:11.787851   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:11.820451   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:44:11.820482   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:44:11.851416   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:44:11.851442   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:44:11.946634   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:44:11.946674   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:11.975802   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:44:11.975830   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:12.010031   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:44:12.010112   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:44:14.600286   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:44:14.611078   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:44:14.611145   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:44:14.638095   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:14.638116   51251 cri.go:89] found id: ""
	I1018 17:44:14.638124   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:44:14.638205   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:14.641787   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:44:14.641856   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:44:14.668881   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:14.668904   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:14.668910   51251 cri.go:89] found id: ""
	I1018 17:44:14.668918   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:44:14.669001   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:14.672474   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:14.675764   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:44:14.675840   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:44:14.699628   51251 cri.go:89] found id: ""
	I1018 17:44:14.699652   51251 logs.go:282] 0 containers: []
	W1018 17:44:14.699660   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:44:14.699666   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:44:14.699723   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:44:14.724155   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:14.724177   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:14.724182   51251 cri.go:89] found id: ""
	I1018 17:44:14.724190   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:44:14.724260   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:14.728073   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:14.731467   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:44:14.731534   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:44:14.757304   51251 cri.go:89] found id: ""
	I1018 17:44:14.757327   51251 logs.go:282] 0 containers: []
	W1018 17:44:14.757354   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:44:14.757361   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:44:14.757420   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:44:14.784778   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:14.784799   51251 cri.go:89] found id: ""
	I1018 17:44:14.784808   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:44:14.784862   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:14.788408   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:44:14.788477   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:44:14.819756   51251 cri.go:89] found id: ""
	I1018 17:44:14.819778   51251 logs.go:282] 0 containers: []
	W1018 17:44:14.819796   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:44:14.819805   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:44:14.819816   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:14.844668   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:44:14.844698   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:44:14.876534   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:44:14.876564   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:44:14.980256   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:44:14.980340   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:15.044346   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:44:15.044386   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:15.121677   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:44:15.121713   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:44:15.203393   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:44:15.203428   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:44:15.219368   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:44:15.219394   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:44:15.296726   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:44:15.289112    5647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:15.289522    5647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:15.291014    5647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:15.291333    5647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:15.292981    5647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:44:15.289112    5647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:15.289522    5647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:15.291014    5647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:15.291333    5647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:15.292981    5647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:44:15.296748   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:44:15.296761   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:15.322490   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:44:15.322516   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:15.364728   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:44:15.364760   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:17.892524   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:44:17.903413   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:44:17.903482   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:44:17.931967   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:17.931989   51251 cri.go:89] found id: ""
	I1018 17:44:17.931997   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:44:17.932052   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:17.935895   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:44:17.936007   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:44:17.983924   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:17.983945   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:17.983950   51251 cri.go:89] found id: ""
	I1018 17:44:17.983958   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:44:17.984014   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:17.987660   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:17.991127   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:44:17.991201   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:44:18.022803   51251 cri.go:89] found id: ""
	I1018 17:44:18.022827   51251 logs.go:282] 0 containers: []
	W1018 17:44:18.022836   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:44:18.022843   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:44:18.022906   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:44:18.064735   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:18.064754   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:18.064759   51251 cri.go:89] found id: ""
	I1018 17:44:18.064767   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:44:18.064823   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:18.068536   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:18.072878   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:44:18.072982   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:44:18.100206   51251 cri.go:89] found id: ""
	I1018 17:44:18.100237   51251 logs.go:282] 0 containers: []
	W1018 17:44:18.100246   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:44:18.100253   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:44:18.100321   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:44:18.127552   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:18.127575   51251 cri.go:89] found id: ""
	I1018 17:44:18.127584   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:44:18.127641   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:18.131667   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:44:18.131732   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:44:18.162707   51251 cri.go:89] found id: ""
	I1018 17:44:18.162731   51251 logs.go:282] 0 containers: []
	W1018 17:44:18.162739   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:44:18.162748   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:44:18.162763   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:44:18.246228   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:44:18.238684    5739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:18.239276    5739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:18.240721    5739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:18.241146    5739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:18.242608    5739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:44:18.238684    5739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:18.239276    5739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:18.240721    5739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:18.241146    5739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:18.242608    5739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:44:18.246250   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:44:18.246263   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:18.277740   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:44:18.277764   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:18.343394   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:44:18.343427   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:18.383823   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:44:18.383854   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:18.443389   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:44:18.443420   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:18.469522   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:44:18.469550   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:44:18.545455   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:44:18.545487   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:44:18.592352   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:44:18.592376   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:44:18.695698   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:44:18.695735   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:44:18.707163   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:44:18.707192   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:21.235420   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:44:21.245952   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:44:21.246019   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:44:21.271930   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:21.271997   51251 cri.go:89] found id: ""
	I1018 17:44:21.272019   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:44:21.272106   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:21.275968   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:44:21.276036   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:44:21.302979   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:21.302997   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:21.303001   51251 cri.go:89] found id: ""
	I1018 17:44:21.303008   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:44:21.303069   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:21.307879   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:21.311562   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:44:21.311627   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:44:21.339660   51251 cri.go:89] found id: ""
	I1018 17:44:21.339681   51251 logs.go:282] 0 containers: []
	W1018 17:44:21.339690   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:44:21.339695   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:44:21.339752   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:44:21.368389   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:21.368411   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:21.368416   51251 cri.go:89] found id: ""
	I1018 17:44:21.368424   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:44:21.368478   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:21.372383   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:21.375709   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:44:21.375779   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:44:21.401944   51251 cri.go:89] found id: ""
	I1018 17:44:21.402017   51251 logs.go:282] 0 containers: []
	W1018 17:44:21.402040   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:44:21.402058   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:44:21.402140   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:44:21.428284   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:21.428303   51251 cri.go:89] found id: ""
	I1018 17:44:21.428312   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:44:21.428392   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:21.432085   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:44:21.432163   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:44:21.456804   51251 cri.go:89] found id: ""
	I1018 17:44:21.456878   51251 logs.go:282] 0 containers: []
	W1018 17:44:21.456899   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:44:21.456922   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:44:21.456987   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:44:21.530466   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:44:21.522476    5877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:21.523226    5877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:21.524791    5877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:21.525409    5877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:21.526934    5877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:44:21.522476    5877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:21.523226    5877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:21.524791    5877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:21.525409    5877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:21.526934    5877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:44:21.530487   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:44:21.530500   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:21.583954   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:44:21.583988   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:21.624634   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:44:21.624667   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:21.683522   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:44:21.683555   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:21.712030   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:44:21.712058   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:21.743203   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:44:21.743227   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:44:21.823114   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:44:21.823149   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:44:21.854521   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:44:21.854548   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:44:21.957239   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:44:21.957276   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:44:21.974988   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:44:21.975013   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:24.514740   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:44:24.525668   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:44:24.525738   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:44:24.553057   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:24.553087   51251 cri.go:89] found id: ""
	I1018 17:44:24.553096   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:44:24.553152   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:24.556981   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:44:24.557053   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:44:24.583773   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:24.583796   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:24.583801   51251 cri.go:89] found id: ""
	I1018 17:44:24.583809   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:44:24.583864   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:24.587649   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:24.591283   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:44:24.591388   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:44:24.617918   51251 cri.go:89] found id: ""
	I1018 17:44:24.617940   51251 logs.go:282] 0 containers: []
	W1018 17:44:24.617949   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:44:24.617959   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:44:24.618025   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:44:24.643293   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:24.643319   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:24.643323   51251 cri.go:89] found id: ""
	I1018 17:44:24.643331   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:44:24.643391   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:24.647045   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:24.650422   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:44:24.650491   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:44:24.676556   51251 cri.go:89] found id: ""
	I1018 17:44:24.676629   51251 logs.go:282] 0 containers: []
	W1018 17:44:24.676652   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:44:24.676670   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:44:24.676753   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:44:24.703335   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:24.703354   51251 cri.go:89] found id: ""
	I1018 17:44:24.703362   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:44:24.703413   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:24.707043   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:44:24.707112   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:44:24.736770   51251 cri.go:89] found id: ""
	I1018 17:44:24.736793   51251 logs.go:282] 0 containers: []
	W1018 17:44:24.736802   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:44:24.736811   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:44:24.736821   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:44:24.831690   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:44:24.831725   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:44:24.845067   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:44:24.845094   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:44:24.915666   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:44:24.907247    6020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:24.907870    6020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:24.909378    6020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:24.910211    6020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:24.911689    6020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:44:24.907247    6020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:24.907870    6020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:24.909378    6020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:24.910211    6020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:24.911689    6020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:44:24.915715   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:44:24.915728   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:24.980758   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:44:24.980794   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:25.013913   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:44:25.013944   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:44:25.095710   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:44:25.095746   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:44:25.136366   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:44:25.136395   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:25.167081   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:44:25.167108   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:25.217068   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:44:25.217106   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:25.250444   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:44:25.250477   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:27.778976   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:44:27.789442   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:44:27.789511   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:44:27.816188   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:27.816211   51251 cri.go:89] found id: ""
	I1018 17:44:27.816219   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:44:27.816273   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:27.819794   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:44:27.819867   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:44:27.846400   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:27.846433   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:27.846439   51251 cri.go:89] found id: ""
	I1018 17:44:27.846461   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:44:27.846546   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:27.850346   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:27.853879   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:44:27.853956   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:44:27.880448   51251 cri.go:89] found id: ""
	I1018 17:44:27.880471   51251 logs.go:282] 0 containers: []
	W1018 17:44:27.880480   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:44:27.880486   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:44:27.880549   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:44:27.908354   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:27.908384   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:27.908389   51251 cri.go:89] found id: ""
	I1018 17:44:27.908397   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:44:27.908454   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:27.913635   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:27.917518   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:44:27.917589   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:44:27.944652   51251 cri.go:89] found id: ""
	I1018 17:44:27.944674   51251 logs.go:282] 0 containers: []
	W1018 17:44:27.944683   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:44:27.944689   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:44:27.944749   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:44:27.978127   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:27.978150   51251 cri.go:89] found id: ""
	I1018 17:44:27.978158   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:44:27.978217   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:27.982028   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:44:27.982097   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:44:28.010364   51251 cri.go:89] found id: ""
	I1018 17:44:28.010395   51251 logs.go:282] 0 containers: []
	W1018 17:44:28.010405   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:44:28.010414   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:44:28.010426   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:44:28.113197   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:44:28.113275   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:28.143438   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:44:28.143464   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:28.193919   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:44:28.193956   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:28.233324   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:44:28.233364   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:44:28.315086   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:44:28.315121   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:44:28.327446   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:44:28.327472   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:44:28.403227   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:44:28.392160    6186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:28.393002    6186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:28.395106    6186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:28.395823    6186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:28.397363    6186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:44:28.392160    6186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:28.393002    6186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:28.395106    6186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:28.395823    6186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:28.397363    6186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:44:28.403250   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:44:28.403262   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:28.467992   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:44:28.468024   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:28.495923   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:44:28.495947   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:28.526646   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:44:28.526674   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:44:31.058337   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:44:31.069976   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:44:31.070050   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:44:31.101306   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:31.101328   51251 cri.go:89] found id: ""
	I1018 17:44:31.101336   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:44:31.101399   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:31.105055   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:44:31.105128   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:44:31.142563   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:31.142588   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:31.142593   51251 cri.go:89] found id: ""
	I1018 17:44:31.142600   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:44:31.142662   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:31.146604   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:31.150365   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:44:31.150435   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:44:31.176760   51251 cri.go:89] found id: ""
	I1018 17:44:31.176785   51251 logs.go:282] 0 containers: []
	W1018 17:44:31.176793   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:44:31.176800   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:44:31.176894   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:44:31.209000   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:31.209022   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:31.209027   51251 cri.go:89] found id: ""
	I1018 17:44:31.209034   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:44:31.209092   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:31.213702   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:31.217030   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:44:31.217134   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:44:31.244577   51251 cri.go:89] found id: ""
	I1018 17:44:31.244600   51251 logs.go:282] 0 containers: []
	W1018 17:44:31.244608   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:44:31.244615   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:44:31.244694   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:44:31.276009   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:31.276030   51251 cri.go:89] found id: ""
	I1018 17:44:31.276037   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:44:31.276126   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:31.279948   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:44:31.280039   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:44:31.312074   51251 cri.go:89] found id: ""
	I1018 17:44:31.312098   51251 logs.go:282] 0 containers: []
	W1018 17:44:31.312108   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:44:31.312117   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:44:31.312146   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:31.374723   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:44:31.374758   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:31.402419   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:44:31.402446   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:31.430538   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:44:31.430564   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:44:31.512803   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:44:31.512837   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:44:31.614079   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:44:31.614114   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:44:31.681910   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:44:31.673049    6314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:31.673806    6314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:31.675573    6314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:31.676196    6314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:31.677982    6314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:44:31.673049    6314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:31.673806    6314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:31.675573    6314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:31.676196    6314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:31.677982    6314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:44:31.681935   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:44:31.681956   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:31.707698   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:44:31.707730   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:31.744929   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:44:31.745030   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:44:31.776082   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:44:31.776119   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:44:31.788990   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:44:31.789026   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:34.355514   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:44:34.366625   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:44:34.366689   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:44:34.394220   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:34.394241   51251 cri.go:89] found id: ""
	I1018 17:44:34.394249   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:44:34.394307   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:34.398229   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:44:34.398301   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:44:34.428966   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:34.428987   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:34.428991   51251 cri.go:89] found id: ""
	I1018 17:44:34.428999   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:44:34.429056   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:34.438000   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:34.443562   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:44:34.443638   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:44:34.470520   51251 cri.go:89] found id: ""
	I1018 17:44:34.470583   51251 logs.go:282] 0 containers: []
	W1018 17:44:34.470596   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:44:34.470603   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:44:34.470660   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:44:34.498015   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:34.498035   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:34.498040   51251 cri.go:89] found id: ""
	I1018 17:44:34.498047   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:44:34.498107   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:34.501820   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:34.505392   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:44:34.505508   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:44:34.531261   51251 cri.go:89] found id: ""
	I1018 17:44:34.531285   51251 logs.go:282] 0 containers: []
	W1018 17:44:34.531294   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:44:34.531301   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:44:34.531391   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:44:34.558417   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:34.558439   51251 cri.go:89] found id: ""
	I1018 17:44:34.558448   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:44:34.558506   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:34.562283   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:44:34.562397   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:44:34.589239   51251 cri.go:89] found id: ""
	I1018 17:44:34.589263   51251 logs.go:282] 0 containers: []
	W1018 17:44:34.589271   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:44:34.589280   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:44:34.589321   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:34.639508   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:44:34.639543   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:34.704073   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:44:34.704111   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:34.730079   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:44:34.730105   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:44:34.812757   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:44:34.812794   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:44:34.844323   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:44:34.844351   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:34.870994   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:44:34.871020   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:34.909712   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:44:34.909738   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:34.949435   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:44:34.949461   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:44:35.051363   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:44:35.051403   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:44:35.064297   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:44:35.064324   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:44:35.143040   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:44:35.134155    6490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:35.134888    6490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:35.136750    6490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:35.137513    6490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:35.139182    6490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:44:35.134155    6490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:35.134888    6490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:35.136750    6490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:35.137513    6490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:35.139182    6490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:44:37.644402   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:44:37.655473   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:44:37.655556   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:44:37.686712   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:37.686743   51251 cri.go:89] found id: ""
	I1018 17:44:37.686753   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:44:37.686818   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:37.690705   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:44:37.690780   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:44:37.717269   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:37.717288   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:37.717293   51251 cri.go:89] found id: ""
	I1018 17:44:37.717300   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:44:37.717365   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:37.721019   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:37.724434   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:44:37.724511   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:44:37.751507   51251 cri.go:89] found id: ""
	I1018 17:44:37.751529   51251 logs.go:282] 0 containers: []
	W1018 17:44:37.751548   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:44:37.751554   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:44:37.751612   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:44:37.780532   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:37.780550   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:37.780555   51251 cri.go:89] found id: ""
	I1018 17:44:37.780562   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:44:37.780620   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:37.784463   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:37.789038   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:44:37.789127   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:44:37.827207   51251 cri.go:89] found id: ""
	I1018 17:44:37.827234   51251 logs.go:282] 0 containers: []
	W1018 17:44:37.827243   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:44:37.827250   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:44:37.827328   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:44:37.854900   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:37.854962   51251 cri.go:89] found id: ""
	I1018 17:44:37.854986   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:44:37.855062   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:37.859902   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:44:37.859977   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:44:37.886300   51251 cri.go:89] found id: ""
	I1018 17:44:37.886365   51251 logs.go:282] 0 containers: []
	W1018 17:44:37.886388   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:44:37.886409   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:44:37.886446   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:44:37.984179   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:44:37.984212   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:44:38.054964   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:44:38.045702    6560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:38.046390    6560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:38.048099    6560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:38.048652    6560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:38.050343    6560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:44:38.045702    6560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:38.046390    6560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:38.048099    6560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:38.048652    6560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:38.050343    6560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:44:38.054994   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:44:38.055010   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:38.084660   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:44:38.084691   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:38.124518   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:44:38.124606   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:38.190852   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:44:38.190893   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:44:38.273991   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:44:38.274027   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:44:38.286517   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:44:38.286546   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:38.338543   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:44:38.338580   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:38.367716   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:44:38.367745   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:38.401155   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:44:38.401184   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:44:40.943389   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:44:40.954255   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:44:40.954330   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:44:40.990505   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:40.990526   51251 cri.go:89] found id: ""
	I1018 17:44:40.990535   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:44:40.990591   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:40.994301   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:44:40.994374   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:44:41.024101   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:41.024123   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:41.024128   51251 cri.go:89] found id: ""
	I1018 17:44:41.024135   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:44:41.024202   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:41.028135   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:41.031764   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:44:41.031846   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:44:41.058027   51251 cri.go:89] found id: ""
	I1018 17:44:41.058110   51251 logs.go:282] 0 containers: []
	W1018 17:44:41.058133   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:44:41.058154   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:44:41.058241   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:44:41.084363   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:41.084429   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:41.084447   51251 cri.go:89] found id: ""
	I1018 17:44:41.084468   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:44:41.084549   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:41.088275   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:41.091806   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:44:41.091872   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:44:41.119266   51251 cri.go:89] found id: ""
	I1018 17:44:41.119288   51251 logs.go:282] 0 containers: []
	W1018 17:44:41.119296   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:44:41.119302   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:44:41.119364   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:44:41.152142   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:41.152162   51251 cri.go:89] found id: ""
	I1018 17:44:41.152171   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:44:41.152233   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:41.155967   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:44:41.156039   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:44:41.183430   51251 cri.go:89] found id: ""
	I1018 17:44:41.183453   51251 logs.go:282] 0 containers: []
	W1018 17:44:41.183461   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:44:41.183470   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:44:41.183481   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:41.217575   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:44:41.217599   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:44:41.314633   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:44:41.314667   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:44:41.383386   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:44:41.373451    6704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:41.374006    6704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:41.375984    6704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:41.377691    6704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:41.379407    6704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:44:41.373451    6704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:41.374006    6704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:41.375984    6704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:41.377691    6704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:41.379407    6704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:44:41.383406   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:44:41.383419   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:41.446018   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:44:41.446089   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:41.488303   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:44:41.488335   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:41.520983   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:44:41.521012   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:44:41.604693   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:44:41.604726   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:44:41.638240   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:44:41.638266   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:44:41.649462   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:44:41.649486   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:41.674875   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:44:41.674902   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:44.238248   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:44:44.255175   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:44:44.255240   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:44:44.287509   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:44.287527   51251 cri.go:89] found id: ""
	I1018 17:44:44.287535   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:44:44.287592   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:44.292053   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:44:44.292125   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:44:44.323105   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:44.323123   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:44.323128   51251 cri.go:89] found id: ""
	I1018 17:44:44.323135   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:44:44.323191   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:44.327287   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:44.331002   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:44:44.331110   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:44:44.362329   51251 cri.go:89] found id: ""
	I1018 17:44:44.362393   51251 logs.go:282] 0 containers: []
	W1018 17:44:44.362415   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:44:44.362436   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:44:44.362517   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:44:44.393314   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:44.393384   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:44.393403   51251 cri.go:89] found id: ""
	I1018 17:44:44.393432   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:44:44.393510   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:44.397610   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:44.401568   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:44:44.401674   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:44:44.439288   51251 cri.go:89] found id: ""
	I1018 17:44:44.439350   51251 logs.go:282] 0 containers: []
	W1018 17:44:44.439370   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:44:44.439391   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:44:44.439473   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:44:44.477857   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:44:44.477920   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:44.477939   51251 cri.go:89] found id: ""
	I1018 17:44:44.477960   51251 logs.go:282] 2 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:44:44.478038   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:44.482903   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:44.487434   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:44:44.487551   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:44:44.527686   51251 cri.go:89] found id: ""
	I1018 17:44:44.527761   51251 logs.go:282] 0 containers: []
	W1018 17:44:44.527784   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:44:44.527823   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:44:44.527850   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:44:44.637841   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:44:44.637917   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:44:44.653818   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:44:44.653846   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:44:44.762008   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:44:44.751907    6845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:44.753161    6845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:44.755038    6845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:44.755967    6845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:44.757158    6845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:44:44.751907    6845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:44.753161    6845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:44.755038    6845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:44.755967    6845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:44.757158    6845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:44:44.762038   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:44:44.762067   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:44.798868   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:44:44.798900   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:44.850591   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:44:44.850634   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:44.938420   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:44:44.938472   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:44:44.980294   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:44:44.980372   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:44:45.089048   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:44:45.089096   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:45.196420   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:44:45.196522   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:45.246623   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:44:45.246803   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:45.295911   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:44:45.295955   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:44:47.851142   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:44:47.862455   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:44:47.862520   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:44:47.888902   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:47.888970   51251 cri.go:89] found id: ""
	I1018 17:44:47.888984   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:44:47.889042   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:47.893115   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:44:47.893208   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:44:47.923068   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:47.923087   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:47.923091   51251 cri.go:89] found id: ""
	I1018 17:44:47.923099   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:44:47.923170   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:47.927351   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:47.931468   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:44:47.931541   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:44:47.958620   51251 cri.go:89] found id: ""
	I1018 17:44:47.958642   51251 logs.go:282] 0 containers: []
	W1018 17:44:47.958651   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:44:47.958657   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:44:47.958717   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:44:47.988421   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:47.988494   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:47.988514   51251 cri.go:89] found id: ""
	I1018 17:44:47.988534   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:44:47.988616   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:47.992743   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:47.996667   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:44:47.996742   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:44:48.025533   51251 cri.go:89] found id: ""
	I1018 17:44:48.025560   51251 logs.go:282] 0 containers: []
	W1018 17:44:48.025568   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:44:48.025575   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:44:48.025654   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:44:48.053974   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:44:48.053997   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:48.054002   51251 cri.go:89] found id: ""
	I1018 17:44:48.054009   51251 logs.go:282] 2 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:44:48.054070   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:48.057945   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:48.061877   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:44:48.061953   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:44:48.090761   51251 cri.go:89] found id: ""
	I1018 17:44:48.090786   51251 logs.go:282] 0 containers: []
	W1018 17:44:48.090795   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:44:48.090805   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:44:48.090817   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:44:48.189723   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:44:48.189756   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:48.221709   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:44:48.221739   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:48.259440   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:44:48.259470   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:48.345516   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:44:48.345553   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:48.374446   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:44:48.374477   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:44:48.460806   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:44:48.460842   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:44:48.473713   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:44:48.473739   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:44:48.554183   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:44:48.545515    7023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:48.546813    7023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:48.547313    7023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:48.548898    7023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:48.549566    7023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:44:48.545515    7023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:48.546813    7023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:48.547313    7023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:48.548898    7023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:48.549566    7023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:44:48.554204   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:44:48.554217   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:48.609158   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:44:48.609190   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:44:48.636984   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:44:48.637062   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:48.664743   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:44:48.664822   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:44:51.198411   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:44:51.210016   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:44:51.210081   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:44:51.236981   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:51.237004   51251 cri.go:89] found id: ""
	I1018 17:44:51.237012   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:44:51.237077   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:51.240676   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:44:51.240750   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:44:51.269356   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:51.269382   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:51.269387   51251 cri.go:89] found id: ""
	I1018 17:44:51.269395   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:44:51.269453   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:51.273122   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:51.277060   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:44:51.277132   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:44:51.304766   51251 cri.go:89] found id: ""
	I1018 17:44:51.304790   51251 logs.go:282] 0 containers: []
	W1018 17:44:51.304799   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:44:51.304805   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:44:51.304865   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:44:51.332379   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:51.332401   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:51.332406   51251 cri.go:89] found id: ""
	I1018 17:44:51.332414   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:44:51.332474   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:51.336518   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:51.341898   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:44:51.341976   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:44:51.367678   51251 cri.go:89] found id: ""
	I1018 17:44:51.367708   51251 logs.go:282] 0 containers: []
	W1018 17:44:51.367726   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:44:51.367732   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:44:51.367796   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:44:51.394153   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:44:51.394175   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:51.394180   51251 cri.go:89] found id: ""
	I1018 17:44:51.394187   51251 logs.go:282] 2 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:44:51.394243   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:51.397993   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:51.401471   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:44:51.401578   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:44:51.428758   51251 cri.go:89] found id: ""
	I1018 17:44:51.428822   51251 logs.go:282] 0 containers: []
	W1018 17:44:51.428844   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:44:51.428870   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:44:51.428894   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:44:51.503688   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:44:51.495917    7130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:51.496423    7130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:51.498141    7130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:51.498547    7130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:51.500003    7130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:44:51.495917    7130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:51.496423    7130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:51.498141    7130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:51.498547    7130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:51.500003    7130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:44:51.503709   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:44:51.503722   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:51.532853   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:44:51.532878   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:51.596823   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:44:51.596858   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:44:51.623499   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:44:51.623527   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:51.653511   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:44:51.653538   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:44:51.743235   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:44:51.743280   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:44:51.775603   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:44:51.775632   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:44:51.875854   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:44:51.875890   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:44:51.893446   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:44:51.893471   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:51.928284   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:44:51.928316   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:51.997158   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:44:51.997193   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:54.531254   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:44:54.544073   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:44:54.544143   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:44:54.572505   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:54.572526   51251 cri.go:89] found id: ""
	I1018 17:44:54.572534   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:44:54.572589   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:54.576276   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:44:54.576349   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:44:54.608530   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:54.608552   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:54.608557   51251 cri.go:89] found id: ""
	I1018 17:44:54.608564   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:44:54.608620   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:54.612802   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:54.616507   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:44:54.616574   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:44:54.646887   51251 cri.go:89] found id: ""
	I1018 17:44:54.646909   51251 logs.go:282] 0 containers: []
	W1018 17:44:54.646918   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:44:54.646924   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:44:54.646985   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:44:54.673624   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:54.673641   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:54.673646   51251 cri.go:89] found id: ""
	I1018 17:44:54.673653   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:44:54.673708   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:54.677580   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:54.680915   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:44:54.681039   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:44:54.707856   51251 cri.go:89] found id: ""
	I1018 17:44:54.707882   51251 logs.go:282] 0 containers: []
	W1018 17:44:54.707890   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:44:54.707897   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:44:54.707985   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:44:54.739572   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:44:54.739596   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:54.739602   51251 cri.go:89] found id: ""
	I1018 17:44:54.739609   51251 logs.go:282] 2 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:44:54.739666   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:54.744278   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:54.747740   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:44:54.747812   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:44:54.786379   51251 cri.go:89] found id: ""
	I1018 17:44:54.786405   51251 logs.go:282] 0 containers: []
	W1018 17:44:54.786413   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:44:54.786423   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:44:54.786435   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:54.850541   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:44:54.850577   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:54.878112   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:44:54.878139   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:54.905434   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:44:54.905462   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:44:54.983610   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:44:54.974914    7302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:54.975800    7302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:54.977585    7302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:54.978207    7302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:54.979920    7302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:44:54.974914    7302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:54.975800    7302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:54.977585    7302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:54.978207    7302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:54.979920    7302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:44:54.983631   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:44:54.983643   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:44:55.018119   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:44:55.018148   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:44:55.096411   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:44:55.096446   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:44:55.134900   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:44:55.134926   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:44:55.237181   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:44:55.237214   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:44:55.250828   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:44:55.250858   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:55.281899   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:44:55.281928   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:55.339174   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:44:55.339208   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:57.880428   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:44:57.891159   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:44:57.891231   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:44:57.921966   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:57.921988   51251 cri.go:89] found id: ""
	I1018 17:44:57.921996   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:44:57.922051   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:57.925877   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:44:57.925946   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:44:57.983701   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:57.983719   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:57.983724   51251 cri.go:89] found id: ""
	I1018 17:44:57.983731   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:44:57.983785   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:57.988147   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:57.991948   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:44:57.992055   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:44:58.027455   51251 cri.go:89] found id: ""
	I1018 17:44:58.027489   51251 logs.go:282] 0 containers: []
	W1018 17:44:58.027498   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:44:58.027504   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:44:58.027572   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:44:58.061874   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:58.061896   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:58.061902   51251 cri.go:89] found id: ""
	I1018 17:44:58.061911   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:44:58.061971   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:58.065752   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:58.069525   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:44:58.069600   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:44:58.099676   51251 cri.go:89] found id: ""
	I1018 17:44:58.099698   51251 logs.go:282] 0 containers: []
	W1018 17:44:58.099707   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:44:58.099720   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:44:58.099778   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:44:58.132718   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:44:58.132740   51251 cri.go:89] found id: ""
	I1018 17:44:58.132748   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:44:58.132803   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:58.136641   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:44:58.136718   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:44:58.161767   51251 cri.go:89] found id: ""
	I1018 17:44:58.161791   51251 logs.go:282] 0 containers: []
	W1018 17:44:58.161799   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:44:58.161808   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:44:58.161820   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:44:58.239848   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:44:58.231755    7434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:58.232488    7434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:58.234323    7434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:58.234970    7434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:58.236249    7434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:44:58.231755    7434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:58.232488    7434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:58.234323    7434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:58.234970    7434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:58.236249    7434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:44:58.239867   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:44:58.239879   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:58.265229   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:44:58.265253   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:58.316459   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:44:58.316495   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:58.382736   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:44:58.382771   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:44:58.461400   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:44:58.461435   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:44:58.496880   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:44:58.496905   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:44:58.600326   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:44:58.600360   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:44:58.612833   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:44:58.612860   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:58.652792   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:44:58.652823   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:58.683598   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:44:58.683624   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
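	The "Gathering logs for ..." entries above show the exact commands this diagnostics pass runs over SSH on the node. Collected in one place for reference (a condensed sketch only; <id> stands for whichever container ID `crictl ps -a --quiet --name=...` returned above):
	
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u crio -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo /usr/local/bin/crictl logs --tail 400 <id>
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
	    sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	
	In this report only the kubectl step is logged as failing (the "failed describe nodes" warnings), because it is the one command that needs the API server on localhost:8443.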
	I1018 17:45:01.209276   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:45:01.221741   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:45:01.221825   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:45:01.255998   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:01.256020   51251 cri.go:89] found id: ""
	I1018 17:45:01.256029   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:45:01.256090   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:01.260323   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:45:01.260410   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:45:01.290623   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:01.290646   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:01.290652   51251 cri.go:89] found id: ""
	I1018 17:45:01.290660   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:45:01.290722   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:01.294923   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:01.299340   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:45:01.299421   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:45:01.328205   51251 cri.go:89] found id: ""
	I1018 17:45:01.328234   51251 logs.go:282] 0 containers: []
	W1018 17:45:01.328244   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:45:01.328251   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:45:01.328321   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:45:01.360099   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:01.360123   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:01.360128   51251 cri.go:89] found id: ""
	I1018 17:45:01.360136   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:45:01.360209   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:01.364283   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:01.368572   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:45:01.368657   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:45:01.397092   51251 cri.go:89] found id: ""
	I1018 17:45:01.397161   51251 logs.go:282] 0 containers: []
	W1018 17:45:01.397184   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:45:01.397207   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:45:01.397297   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:45:01.426452   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:01.426520   51251 cri.go:89] found id: ""
	I1018 17:45:01.426537   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:45:01.426623   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:01.430959   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:45:01.431090   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:45:01.460044   51251 cri.go:89] found id: ""
	I1018 17:45:01.460085   51251 logs.go:282] 0 containers: []
	W1018 17:45:01.460095   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:45:01.460126   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:45:01.460171   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:45:01.536047   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:45:01.536083   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:45:01.548838   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:45:01.548870   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:01.581436   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:45:01.581464   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:01.639347   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:45:01.639384   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:01.667540   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:45:01.667571   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:45:01.714304   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:45:01.714330   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:45:01.813430   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:45:01.813510   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:45:01.882898   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:45:01.873459    7615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:01.874354    7615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:01.876306    7615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:01.877166    7615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:01.878779    7615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:45:01.873459    7615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:01.874354    7615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:01.876306    7615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:01.877166    7615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:01.878779    7615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:45:01.882921   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:45:01.882937   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:01.917303   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:45:01.917407   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:01.999403   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:45:01.999445   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
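	Each "describe nodes" attempt in this window fails identically: kubectl on the node cannot reach the API server at localhost:8443, even though a kube-apiserver container (707afb8644f8...) is listed by crictl. A quick way to tell "container exists but is not serving" apart from a kubeconfig problem is to probe the port and read that container's log directly (a sketch; the healthz probe is an assumption, not something the test ran):
	
	    # Is anything answering on 8443? (assumes curl is available on the node)
	    curl -sk https://localhost:8443/healthz || echo "apiserver not serving"
	    # What does the apiserver container itself report?
	    sudo crictl ps -a --quiet --name=kube-apiserver
	    sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4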
	I1018 17:45:04.533522   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:45:04.544111   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:45:04.544187   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:45:04.570770   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:04.570840   51251 cri.go:89] found id: ""
	I1018 17:45:04.570855   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:45:04.570912   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:04.575103   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:45:04.575198   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:45:04.609501   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:04.609532   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:04.609537   51251 cri.go:89] found id: ""
	I1018 17:45:04.609545   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:45:04.609600   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:04.613955   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:04.617439   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:45:04.617516   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:45:04.645280   51251 cri.go:89] found id: ""
	I1018 17:45:04.645306   51251 logs.go:282] 0 containers: []
	W1018 17:45:04.645315   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:45:04.645324   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:45:04.645392   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:45:04.672130   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:04.672153   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:04.672158   51251 cri.go:89] found id: ""
	I1018 17:45:04.672167   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:45:04.672223   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:04.676297   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:04.681021   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:45:04.681099   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:45:04.707420   51251 cri.go:89] found id: ""
	I1018 17:45:04.707444   51251 logs.go:282] 0 containers: []
	W1018 17:45:04.707452   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:45:04.707461   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:45:04.707517   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:45:04.737533   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:04.737555   51251 cri.go:89] found id: ""
	I1018 17:45:04.737565   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:45:04.737631   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:04.741271   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:45:04.741342   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:45:04.767657   51251 cri.go:89] found id: ""
	I1018 17:45:04.767681   51251 logs.go:282] 0 containers: []
	W1018 17:45:04.767689   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:45:04.767699   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:45:04.767710   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:45:04.863553   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:45:04.863587   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:45:04.875569   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:45:04.875600   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:04.930436   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:45:04.930476   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:04.969240   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:45:04.969276   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:05.039302   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:45:05.039336   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:05.067077   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:45:05.067103   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:45:05.148387   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:45:05.148422   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:45:05.223337   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:45:05.215470    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:05.216065    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:05.217641    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:05.218213    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:05.219737    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:45:05.215470    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:05.216065    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:05.217641    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:05.218213    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:05.219737    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:45:05.223369   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:45:05.223382   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:05.249066   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:45:05.249091   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:05.280440   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:45:05.280465   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
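	Between passes the tooling waits a couple of seconds and checks again for an apiserver process (the repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` lines at 17:45:01, 17:45:04, 17:45:07, ...), then regathers the same logs. Reduced to shell, the observable cadence looks roughly like this (a sketch of what the log shows, not minikube's actual Go code in logs.go/ssh_runner.go):
	
	    # Observable cadence from this report (sketch): every ~2-3 s, look for an
	    # apiserver process, re-enumerate containers, and re-collect component logs.
	    # The real exit condition is not visible here and is assumed.
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	        sudo crictl ps -a --quiet --name=kube-apiserver
	        # ... journalctl / crictl logs collection as in the passes above ...
	        sleep 2.5
	    done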
	I1018 17:45:07.817192   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:45:07.827427   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:45:07.827497   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:45:07.853178   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:07.853198   51251 cri.go:89] found id: ""
	I1018 17:45:07.853206   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:45:07.853261   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:07.857004   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:45:07.857072   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:45:07.882619   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:07.882640   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:07.882645   51251 cri.go:89] found id: ""
	I1018 17:45:07.882652   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:45:07.882716   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:07.886518   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:07.890146   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:45:07.890220   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:45:07.917313   51251 cri.go:89] found id: ""
	I1018 17:45:07.917338   51251 logs.go:282] 0 containers: []
	W1018 17:45:07.917351   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:45:07.917358   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:45:07.917421   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:45:07.950191   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:07.950218   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:07.950223   51251 cri.go:89] found id: ""
	I1018 17:45:07.950234   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:45:07.950304   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:07.953933   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:07.957694   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:45:07.957770   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:45:07.990144   51251 cri.go:89] found id: ""
	I1018 17:45:07.990167   51251 logs.go:282] 0 containers: []
	W1018 17:45:07.990176   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:45:07.990183   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:45:07.990240   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:45:08.023638   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:08.023660   51251 cri.go:89] found id: ""
	I1018 17:45:08.023669   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:45:08.023729   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:08.028231   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:45:08.028307   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:45:08.056653   51251 cri.go:89] found id: ""
	I1018 17:45:08.056678   51251 logs.go:282] 0 containers: []
	W1018 17:45:08.056687   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:45:08.056696   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:45:08.056708   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:45:08.132641   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:45:08.122188    7845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:08.122913    7845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:08.124506    7845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:08.124806    7845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:08.126307    7845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:45:08.122188    7845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:08.122913    7845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:08.124506    7845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:08.124806    7845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:08.126307    7845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:45:08.132662   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:45:08.132677   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:08.197105   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:45:08.197143   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:08.238131   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:45:08.238157   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:08.266672   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:45:08.266701   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:45:08.302562   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:45:08.302587   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:45:08.411059   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:45:08.411103   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:45:08.423232   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:45:08.423261   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:08.449524   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:45:08.449549   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:08.505779   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:45:08.505811   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:08.540674   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:45:08.540708   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:45:11.118218   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:45:11.130399   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:45:11.130521   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:45:11.164618   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:11.164637   51251 cri.go:89] found id: ""
	I1018 17:45:11.164644   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:45:11.164700   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:11.168380   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:45:11.168453   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:45:11.195034   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:11.195059   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:11.195065   51251 cri.go:89] found id: ""
	I1018 17:45:11.195072   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:45:11.195126   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:11.199134   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:11.203492   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:45:11.203557   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:45:11.230659   51251 cri.go:89] found id: ""
	I1018 17:45:11.230681   51251 logs.go:282] 0 containers: []
	W1018 17:45:11.230689   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:45:11.230697   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:45:11.230773   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:45:11.256814   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:11.256842   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:11.256847   51251 cri.go:89] found id: ""
	I1018 17:45:11.256855   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:45:11.256973   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:11.260554   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:11.263940   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:45:11.264009   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:45:11.289036   51251 cri.go:89] found id: ""
	I1018 17:45:11.289114   51251 logs.go:282] 0 containers: []
	W1018 17:45:11.289128   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:45:11.289134   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:45:11.289192   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:45:11.320844   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:11.320867   51251 cri.go:89] found id: ""
	I1018 17:45:11.320875   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:45:11.320928   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:11.324471   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:45:11.324537   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:45:11.350002   51251 cri.go:89] found id: ""
	I1018 17:45:11.350028   51251 logs.go:282] 0 containers: []
	W1018 17:45:11.350036   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:45:11.350045   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:45:11.350057   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:45:11.415699   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:45:11.407276    7984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:11.408085    7984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:11.409925    7984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:11.410627    7984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:11.412208    7984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:45:11.407276    7984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:11.408085    7984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:11.409925    7984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:11.410627    7984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:11.412208    7984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:45:11.415719   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:45:11.415732   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:11.467144   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:45:11.467178   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:11.500116   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:45:11.500149   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:11.565053   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:45:11.565083   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:45:11.594806   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:45:11.594833   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:11.621385   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:45:11.621416   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:11.649391   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:45:11.649418   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:11.681270   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:45:11.681294   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:45:11.758017   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:45:11.758049   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:45:11.856363   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:45:11.856394   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:45:14.369690   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:45:14.380482   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:45:14.380582   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:45:14.406908   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:14.406929   51251 cri.go:89] found id: ""
	I1018 17:45:14.406937   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:45:14.406991   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:14.410922   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:45:14.410995   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:45:14.438715   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:14.438787   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:14.438805   51251 cri.go:89] found id: ""
	I1018 17:45:14.438825   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:45:14.438910   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:14.442634   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:14.446455   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:45:14.446583   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:45:14.472662   51251 cri.go:89] found id: ""
	I1018 17:45:14.472729   51251 logs.go:282] 0 containers: []
	W1018 17:45:14.472740   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:45:14.472749   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:45:14.472837   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:45:14.499722   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:14.499787   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:14.499804   51251 cri.go:89] found id: ""
	I1018 17:45:14.499826   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:45:14.499910   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:14.503638   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:14.507247   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:45:14.507364   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:45:14.534947   51251 cri.go:89] found id: ""
	I1018 17:45:14.534973   51251 logs.go:282] 0 containers: []
	W1018 17:45:14.534981   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:45:14.534987   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:45:14.535064   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:45:14.561664   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:14.561686   51251 cri.go:89] found id: ""
	I1018 17:45:14.561695   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:45:14.561753   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:14.565710   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:45:14.565806   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:45:14.595947   51251 cri.go:89] found id: ""
	I1018 17:45:14.595972   51251 logs.go:282] 0 containers: []
	W1018 17:45:14.595980   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:45:14.595990   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:45:14.596029   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:45:14.671772   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:45:14.671807   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:45:14.775531   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:45:14.775566   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:45:14.787782   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:45:14.787811   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:14.819786   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:45:14.819816   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:45:14.851924   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:45:14.851951   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:45:14.920046   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:45:14.911958    8150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:14.912762    8150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:14.914424    8150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:14.914744    8150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:14.916204    8150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:45:14.911958    8150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:14.912762    8150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:14.914424    8150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:14.914744    8150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:14.916204    8150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:45:14.920119   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:45:14.920139   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:14.977739   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:45:14.977775   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:15.032058   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:45:15.032091   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:15.102494   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:45:15.102529   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:15.138731   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:45:15.138757   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:17.666030   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:45:17.676690   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:45:17.676760   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:45:17.703559   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:17.703578   51251 cri.go:89] found id: ""
	I1018 17:45:17.703585   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:45:17.703638   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:17.707859   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:45:17.707930   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:45:17.735399   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:17.735422   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:17.735433   51251 cri.go:89] found id: ""
	I1018 17:45:17.735441   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:45:17.735498   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:17.739407   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:17.742711   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:45:17.742782   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:45:17.773860   51251 cri.go:89] found id: ""
	I1018 17:45:17.773930   51251 logs.go:282] 0 containers: []
	W1018 17:45:17.773946   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:45:17.773953   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:45:17.774014   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:45:17.800989   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:17.801015   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:17.801021   51251 cri.go:89] found id: ""
	I1018 17:45:17.801028   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:45:17.801094   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:17.805064   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:17.808714   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:45:17.808845   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:45:17.835041   51251 cri.go:89] found id: ""
	I1018 17:45:17.835065   51251 logs.go:282] 0 containers: []
	W1018 17:45:17.835073   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:45:17.835080   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:45:17.835141   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:45:17.866314   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:17.866337   51251 cri.go:89] found id: ""
	I1018 17:45:17.866345   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:45:17.866406   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:17.870038   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:45:17.870110   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:45:17.895894   51251 cri.go:89] found id: ""
	I1018 17:45:17.895916   51251 logs.go:282] 0 containers: []
	W1018 17:45:17.895925   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:45:17.895934   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:45:17.895945   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:45:17.998692   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:45:17.998766   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:45:18.015153   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:45:18.015182   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:18.068223   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:45:18.068259   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:45:18.154314   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:45:18.154356   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:45:18.243477   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:45:18.234737    8277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:18.235447    8277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:18.237270    8277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:18.237840    8277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:18.239403    8277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:45:18.234737    8277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:18.235447    8277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:18.237270    8277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:18.237840    8277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:18.239403    8277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:45:18.243497   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:45:18.243509   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:18.275940   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:45:18.275970   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:18.316930   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:45:18.316995   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:18.389081   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:45:18.389116   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:18.418930   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:45:18.418956   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:18.449161   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:45:18.449188   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
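For context on the repeated pattern above: minikube is waiting for the kube-apiserver to become healthy. Each iteration first looks for the process with pgrep, then enumerates control-plane containers through crictl, and only falls back to gathering per-component logs when the check fails. A minimal sketch of that probe, assuming sudo and crictl are available locally (minikube actually runs these commands over SSH inside the node; the structure below is illustrative, not minikube's real code):

```go
// Illustrative sketch of the apiserver probe seen in the log above.
// Assumption: run locally with sudo and crictl on PATH; minikube runs the
// same commands over SSH inside the node via ssh_runner.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Step 1: is a kube-apiserver process running for this profile?
	if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err != nil {
		fmt.Println("no kube-apiserver process found:", err)
	}

	// Step 2: list all kube-apiserver containers (running or exited) via CRI.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name=kube-apiserver").Output()
	if err != nil {
		fmt.Println("crictl ps failed:", err)
		return
	}
	ids := strings.Fields(string(out))
	fmt.Printf("found %d kube-apiserver container(s): %v\n", len(ids), ids)
}
```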
	I1018 17:45:20.980259   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:45:20.991356   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:45:20.991427   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:45:21.028373   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:21.028396   51251 cri.go:89] found id: ""
	I1018 17:45:21.028404   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:45:21.028462   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:21.031989   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:45:21.032060   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:45:21.061105   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:21.061126   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:21.061138   51251 cri.go:89] found id: ""
	I1018 17:45:21.061147   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:45:21.061206   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:21.064983   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:21.068555   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:45:21.068622   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:45:21.095318   51251 cri.go:89] found id: ""
	I1018 17:45:21.095340   51251 logs.go:282] 0 containers: []
	W1018 17:45:21.095348   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:45:21.095354   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:45:21.095410   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:45:21.132132   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:21.132167   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:21.132172   51251 cri.go:89] found id: ""
	I1018 17:45:21.132195   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:45:21.132278   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:21.136778   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:21.140214   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:45:21.140288   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:45:21.172583   51251 cri.go:89] found id: ""
	I1018 17:45:21.172605   51251 logs.go:282] 0 containers: []
	W1018 17:45:21.172614   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:45:21.172620   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:45:21.172675   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:45:21.203092   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:21.203113   51251 cri.go:89] found id: ""
	I1018 17:45:21.203121   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:45:21.203176   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:21.207592   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:45:21.207657   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:45:21.235546   51251 cri.go:89] found id: ""
	I1018 17:45:21.235570   51251 logs.go:282] 0 containers: []
	W1018 17:45:21.235580   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:45:21.235589   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:45:21.235635   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:45:21.332614   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:45:21.332652   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:21.360929   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:45:21.361068   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:21.401211   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:45:21.401249   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:21.468558   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:45:21.468594   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:21.498171   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:45:21.498196   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:45:21.576112   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:45:21.576147   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:45:21.607742   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:45:21.607775   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:45:21.619918   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:45:21.619943   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:45:21.687350   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:45:21.679038    8451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:21.679743    8451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:21.681303    8451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:21.681885    8451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:21.683555    8451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:45:21.679038    8451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:21.679743    8451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:21.681303    8451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:21.681885    8451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:21.683555    8451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:45:21.687371   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:45:21.687384   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:21.742021   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:45:21.742057   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:24.270296   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:45:24.281336   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:45:24.281412   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:45:24.310155   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:24.310176   51251 cri.go:89] found id: ""
	I1018 17:45:24.310184   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:45:24.310236   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:24.314848   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:45:24.314949   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:45:24.343101   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:24.343140   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:24.343146   51251 cri.go:89] found id: ""
	I1018 17:45:24.343154   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:45:24.343214   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:24.347137   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:24.350301   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:45:24.350364   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:45:24.375739   51251 cri.go:89] found id: ""
	I1018 17:45:24.375763   51251 logs.go:282] 0 containers: []
	W1018 17:45:24.375774   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:45:24.375787   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:45:24.375845   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:45:24.414912   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:24.414933   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:24.414944   51251 cri.go:89] found id: ""
	I1018 17:45:24.414952   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:45:24.415006   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:24.419585   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:24.423104   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:45:24.423211   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:45:24.449615   51251 cri.go:89] found id: ""
	I1018 17:45:24.449639   51251 logs.go:282] 0 containers: []
	W1018 17:45:24.449647   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:45:24.449653   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:45:24.449709   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:45:24.476036   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:24.476057   51251 cri.go:89] found id: ""
	I1018 17:45:24.476065   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:45:24.476126   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:24.479757   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:45:24.479825   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:45:24.512386   51251 cri.go:89] found id: ""
	I1018 17:45:24.512409   51251 logs.go:282] 0 containers: []
	W1018 17:45:24.512417   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:45:24.512426   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:45:24.512438   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:24.538617   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:45:24.538645   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:24.592949   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:45:24.592984   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:24.621215   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:45:24.621242   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:45:24.697575   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:45:24.697611   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:45:24.769130   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:45:24.760873    8565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:24.761713    8565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:24.763257    8565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:24.763723    8565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:24.765324    8565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:45:24.760873    8565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:24.761713    8565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:24.763257    8565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:24.763723    8565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:24.765324    8565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:45:24.769206   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:45:24.769228   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:24.807477   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:45:24.807508   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:24.880464   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:45:24.880506   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:24.913114   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:45:24.913140   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:45:24.946306   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:45:24.946335   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:45:25.051970   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:45:25.052004   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:45:27.565286   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:45:27.576658   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:45:27.576726   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:45:27.613181   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:27.613202   51251 cri.go:89] found id: ""
	I1018 17:45:27.613210   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:45:27.613264   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:27.617394   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:45:27.617462   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:45:27.645391   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:27.645413   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:27.645418   51251 cri.go:89] found id: ""
	I1018 17:45:27.645426   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:45:27.645494   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:27.649249   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:27.652792   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:45:27.652866   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:45:27.679303   51251 cri.go:89] found id: ""
	I1018 17:45:27.679368   51251 logs.go:282] 0 containers: []
	W1018 17:45:27.679390   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:45:27.679408   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:45:27.679492   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:45:27.705387   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:27.705453   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:27.705466   51251 cri.go:89] found id: ""
	I1018 17:45:27.705475   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:45:27.705532   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:27.709305   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:27.713679   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:45:27.713761   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:45:27.740178   51251 cri.go:89] found id: ""
	I1018 17:45:27.740203   51251 logs.go:282] 0 containers: []
	W1018 17:45:27.740211   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:45:27.740218   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:45:27.740277   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:45:27.768320   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:27.768342   51251 cri.go:89] found id: ""
	I1018 17:45:27.768351   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:45:27.768416   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:27.772360   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:45:27.772471   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:45:27.797997   51251 cri.go:89] found id: ""
	I1018 17:45:27.798018   51251 logs.go:282] 0 containers: []
	W1018 17:45:27.798026   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:45:27.798049   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:45:27.798061   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:27.824302   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:45:27.824379   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:27.859099   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:45:27.859131   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:27.889803   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:45:27.889830   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:45:27.902196   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:45:27.902221   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:27.958924   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:45:27.958960   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:28.038453   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:45:28.038489   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:28.067717   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:45:28.067748   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:45:28.156959   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:45:28.156998   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:45:28.189533   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:45:28.189561   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:45:28.296814   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:45:28.296848   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:45:28.370306   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:45:28.360661    8742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:28.362171    8742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:28.362714    8742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:28.364316    8742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:28.364866    8742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:45:28.360661    8742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:28.362171    8742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:28.362714    8742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:28.364316    8742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:28.364866    8742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
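The recurring stderr blocks above all report the same symptom: kubectl inside the node cannot reach the apiserver's secure port, so every "describe nodes" attempt ends with "connection refused" on localhost:8443 and the log gatherer falls back to container and journal logs. A minimal sketch that reproduces the same check, assuming the same localhost:8443 address used by the failed kubectl calls:

```go
// Illustrative sketch: a plain TCP dial to the apiserver's secure port.
// "localhost:8443" is the address the failed kubectl calls used above.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// Matches the "connect: connection refused" lines in the stderr blocks.
		fmt.Println("apiserver not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("apiserver port is accepting connections")
}
```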
	I1018 17:45:30.870515   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:45:30.881788   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:45:30.881863   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:45:30.910070   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:30.910091   51251 cri.go:89] found id: ""
	I1018 17:45:30.910099   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:45:30.910154   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:30.914699   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:45:30.914767   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:45:30.944925   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:30.944970   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:30.944975   51251 cri.go:89] found id: ""
	I1018 17:45:30.944982   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:45:30.945037   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:30.948747   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:30.954312   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:45:30.954375   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:45:30.992317   51251 cri.go:89] found id: ""
	I1018 17:45:30.992339   51251 logs.go:282] 0 containers: []
	W1018 17:45:30.992347   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:45:30.992353   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:45:30.992409   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:45:31.020830   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:31.020849   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:31.020853   51251 cri.go:89] found id: ""
	I1018 17:45:31.020860   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:45:31.020918   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:31.025302   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:31.028979   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:45:31.029048   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:45:31.066137   51251 cri.go:89] found id: ""
	I1018 17:45:31.066238   51251 logs.go:282] 0 containers: []
	W1018 17:45:31.066262   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:45:31.066295   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:45:31.066401   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:45:31.093628   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:31.093651   51251 cri.go:89] found id: ""
	I1018 17:45:31.093659   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:45:31.093747   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:31.097751   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:45:31.097830   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:45:31.126496   51251 cri.go:89] found id: ""
	I1018 17:45:31.126517   51251 logs.go:282] 0 containers: []
	W1018 17:45:31.126526   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:45:31.126535   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:45:31.126547   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:45:31.199157   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:45:31.190529    8811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:31.191738    8811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:31.193086    8811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:31.193754    8811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:31.195583    8811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:45:31.190529    8811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:31.191738    8811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:31.193086    8811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:31.193754    8811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:31.195583    8811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:45:31.199180   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:45:31.199192   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:31.227645   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:45:31.227672   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:31.299176   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:45:31.299211   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:31.331846   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:45:31.331870   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:45:31.408603   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:45:31.408637   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:45:31.443678   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:45:31.443708   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:45:31.543336   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:45:31.543370   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:31.584237   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:45:31.584267   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:31.657778   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:45:31.657815   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:31.687304   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:45:31.687331   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
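Each "Gathering logs for <component> [<container-id>] ..." step above shells out to the same command: crictl logs --tail 400 against a container ID obtained from the earlier crictl ps calls. A short sketch of that pattern, assuming bash, sudo, and crictl are present (the helper name is hypothetical, not a minikube function):

```go
// Illustrative sketch of the per-container log collection repeated above:
//   sudo /usr/local/bin/crictl logs --tail 400 <container-id>
package main

import (
	"fmt"
	"os/exec"
)

// tailContainerLogs is a hypothetical helper, not minikube's own code.
func tailContainerLogs(id string, lines int) (string, error) {
	cmd := fmt.Sprintf("sudo crictl logs --tail %d %s", lines, id)
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	return string(out), err
}

func main() {
	// ID prefix taken from the kube-controller-manager container in the log;
	// crictl accepts unambiguous ID prefixes.
	logs, err := tailContainerLogs("9fbf215d6ecc", 400)
	if err != nil {
		fmt.Println("crictl logs failed:", err)
	}
	fmt.Println(logs)
}
```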
	I1018 17:45:34.200278   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:45:34.213848   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:45:34.213915   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:45:34.240838   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:34.240860   51251 cri.go:89] found id: ""
	I1018 17:45:34.240874   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:45:34.240930   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:34.244825   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:45:34.244901   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:45:34.271020   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:34.271040   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:34.271044   51251 cri.go:89] found id: ""
	I1018 17:45:34.271052   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:45:34.271106   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:34.274974   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:34.278648   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:45:34.278748   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:45:34.306959   51251 cri.go:89] found id: ""
	I1018 17:45:34.306980   51251 logs.go:282] 0 containers: []
	W1018 17:45:34.306988   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:45:34.307023   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:45:34.307092   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:45:34.332551   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:34.332573   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:34.332578   51251 cri.go:89] found id: ""
	I1018 17:45:34.332585   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:45:34.332641   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:34.336514   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:34.340414   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:45:34.340491   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:45:34.366530   51251 cri.go:89] found id: ""
	I1018 17:45:34.366556   51251 logs.go:282] 0 containers: []
	W1018 17:45:34.366566   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:45:34.366572   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:45:34.366633   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:45:34.393555   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:34.393573   51251 cri.go:89] found id: ""
	I1018 17:45:34.393581   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:45:34.393637   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:34.397566   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:45:34.397635   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:45:34.424542   51251 cri.go:89] found id: ""
	I1018 17:45:34.424566   51251 logs.go:282] 0 containers: []
	W1018 17:45:34.424575   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:45:34.424584   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:45:34.424595   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:45:34.436112   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:45:34.436137   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:45:34.507631   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:45:34.499819    8951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:34.500689    8951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:34.501741    8951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:34.502269    8951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:34.503964    8951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:45:34.499819    8951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:34.500689    8951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:34.501741    8951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:34.502269    8951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:34.503964    8951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:45:34.507654   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:45:34.507666   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:34.562029   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:45:34.562062   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:34.599739   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:45:34.599770   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:34.628468   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:45:34.628493   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:45:34.702022   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:45:34.702053   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:45:34.731823   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:45:34.731851   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:45:34.830492   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:45:34.830526   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:34.860325   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:45:34.860350   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:34.928523   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:45:34.928564   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:37.460864   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:45:37.472124   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:45:37.472190   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:45:37.499832   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:37.499854   51251 cri.go:89] found id: ""
	I1018 17:45:37.499862   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:45:37.499920   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:37.503595   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:45:37.503663   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:45:37.531543   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:37.531563   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:37.531569   51251 cri.go:89] found id: ""
	I1018 17:45:37.531576   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:45:37.531630   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:37.535265   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:37.538643   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:45:37.538712   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:45:37.565328   51251 cri.go:89] found id: ""
	I1018 17:45:37.565359   51251 logs.go:282] 0 containers: []
	W1018 17:45:37.565368   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:45:37.565374   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:45:37.565434   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:45:37.602468   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:37.602489   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:37.602494   51251 cri.go:89] found id: ""
	I1018 17:45:37.602501   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:45:37.602557   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:37.606311   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:37.609849   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:45:37.609919   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:45:37.640018   51251 cri.go:89] found id: ""
	I1018 17:45:37.640087   51251 logs.go:282] 0 containers: []
	W1018 17:45:37.640110   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:45:37.640131   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:45:37.640216   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:45:37.666232   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:37.666305   51251 cri.go:89] found id: ""
	I1018 17:45:37.666334   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:45:37.666402   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:37.669826   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:45:37.669905   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:45:37.696068   51251 cri.go:89] found id: ""
	I1018 17:45:37.696104   51251 logs.go:282] 0 containers: []
	W1018 17:45:37.696112   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:45:37.696121   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:45:37.696158   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:37.767014   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:45:37.767049   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:37.799133   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:45:37.799158   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:45:37.883995   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:45:37.884029   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:37.919112   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:45:37.919145   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:37.968245   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:45:37.968269   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:45:38.008695   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:45:38.008740   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:45:38.109431   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:45:38.109506   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:45:38.124458   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:45:38.124529   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:45:38.217277   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:45:38.191743    9135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:38.192499    9135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:38.207164    9135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:38.208077    9135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:38.209702    9135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:45:38.191743    9135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:38.192499    9135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:38.207164    9135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:38.208077    9135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:38.209702    9135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:45:38.217297   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:45:38.217310   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:38.247001   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:45:38.247027   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:40.816985   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:45:40.827390   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:45:40.827474   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:45:40.854344   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:40.854363   51251 cri.go:89] found id: ""
	I1018 17:45:40.854371   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:45:40.854426   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:40.858780   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:45:40.858879   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:45:40.888649   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:40.888707   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:40.888726   51251 cri.go:89] found id: ""
	I1018 17:45:40.888754   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:45:40.888823   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:40.893141   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:40.897039   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:45:40.897111   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:45:40.930280   51251 cri.go:89] found id: ""
	I1018 17:45:40.930304   51251 logs.go:282] 0 containers: []
	W1018 17:45:40.930313   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:45:40.930319   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:45:40.930375   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:45:40.957741   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:40.957764   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:40.957769   51251 cri.go:89] found id: ""
	I1018 17:45:40.957777   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:45:40.957854   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:40.962938   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:40.967322   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:45:40.967388   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:45:40.995139   51251 cri.go:89] found id: ""
	I1018 17:45:40.995216   51251 logs.go:282] 0 containers: []
	W1018 17:45:40.995230   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:45:40.995237   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:45:40.995304   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:45:41.025259   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:41.025280   51251 cri.go:89] found id: ""
	I1018 17:45:41.025287   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:45:41.025344   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:41.029459   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:45:41.029553   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:45:41.055678   51251 cri.go:89] found id: ""
	I1018 17:45:41.055710   51251 logs.go:282] 0 containers: []
	W1018 17:45:41.055719   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:45:41.055728   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:45:41.055745   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:45:41.097365   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:45:41.097395   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:45:41.108644   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:45:41.108669   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:41.152656   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:45:41.152685   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:45:41.240199   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:45:41.240234   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:45:41.347931   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:45:41.347967   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:45:41.414489   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:45:41.405260    9247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:41.405872    9247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:41.407642    9247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:41.408232    9247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:41.410751    9247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:45:41.405260    9247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:41.405872    9247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:41.407642    9247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:41.408232    9247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:41.410751    9247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:45:41.414511   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:45:41.414525   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:41.440777   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:45:41.440802   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:41.496567   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:45:41.496602   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:41.569402   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:45:41.569445   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:41.599116   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:45:41.599143   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:44.128092   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:45:44.139312   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:45:44.139380   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:45:44.166514   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:44.166533   51251 cri.go:89] found id: ""
	I1018 17:45:44.166541   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:45:44.166596   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:44.170245   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:45:44.170317   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:45:44.210379   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:44.210397   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:44.210402   51251 cri.go:89] found id: ""
	I1018 17:45:44.210410   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:45:44.210464   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:44.214239   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:44.217585   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:45:44.217650   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:45:44.242978   51251 cri.go:89] found id: ""
	I1018 17:45:44.243001   51251 logs.go:282] 0 containers: []
	W1018 17:45:44.243009   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:45:44.243016   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:45:44.243069   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:45:44.270660   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:44.270680   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:44.270685   51251 cri.go:89] found id: ""
	I1018 17:45:44.270692   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:45:44.270746   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:44.274435   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:44.278022   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:45:44.278090   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:45:44.314849   51251 cri.go:89] found id: ""
	I1018 17:45:44.314873   51251 logs.go:282] 0 containers: []
	W1018 17:45:44.314881   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:45:44.314887   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:45:44.314951   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:45:44.345002   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:44.345025   51251 cri.go:89] found id: ""
	I1018 17:45:44.345034   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:45:44.345091   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:44.348718   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:45:44.348785   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:45:44.373779   51251 cri.go:89] found id: ""
	I1018 17:45:44.373804   51251 logs.go:282] 0 containers: []
	W1018 17:45:44.373812   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:45:44.373828   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:45:44.373839   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:45:44.448448   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:45:44.448482   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:45:44.479822   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:45:44.479848   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:45:44.583615   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:45:44.583649   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:45:44.597191   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:45:44.597217   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:44.623357   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:45:44.623385   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:44.680939   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:45:44.680970   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:44.715142   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:45:44.715173   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:44.742106   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:45:44.742133   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:45:44.808539   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:45:44.799128    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:44.799968    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:44.801462    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:44.801790    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:44.803327    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:45:44.799128    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:44.799968    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:44.801462    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:44.801790    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:44.803327    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:45:44.808609   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:45:44.808640   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:44.878644   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:45:44.878682   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:47.415612   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:45:47.426226   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:45:47.426291   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:45:47.453489   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:47.453509   51251 cri.go:89] found id: ""
	I1018 17:45:47.453517   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:45:47.453571   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:47.457326   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:45:47.457406   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:45:47.482854   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:47.482921   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:47.482931   51251 cri.go:89] found id: ""
	I1018 17:45:47.482939   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:45:47.482996   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:47.487182   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:47.490682   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:45:47.490788   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:45:47.518326   51251 cri.go:89] found id: ""
	I1018 17:45:47.518348   51251 logs.go:282] 0 containers: []
	W1018 17:45:47.518357   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:45:47.518364   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:45:47.518423   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:45:47.545707   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:47.545729   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:47.545734   51251 cri.go:89] found id: ""
	I1018 17:45:47.545742   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:45:47.545795   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:47.549377   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:47.552749   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:45:47.552816   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:45:47.578086   51251 cri.go:89] found id: ""
	I1018 17:45:47.578108   51251 logs.go:282] 0 containers: []
	W1018 17:45:47.578116   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:45:47.578122   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:45:47.578179   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:45:47.621041   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:47.621110   51251 cri.go:89] found id: ""
	I1018 17:45:47.621124   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:45:47.621185   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:47.624873   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:45:47.624982   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:45:47.651153   51251 cri.go:89] found id: ""
	I1018 17:45:47.651180   51251 logs.go:282] 0 containers: []
	W1018 17:45:47.651189   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:45:47.651198   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:45:47.651227   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:45:47.748488   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:45:47.748523   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:45:47.816047   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:45:47.807483    9500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:47.808149    9500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:47.809893    9500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:47.810874    9500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:47.812453    9500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:45:47.807483    9500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:47.808149    9500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:47.809893    9500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:47.810874    9500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:47.812453    9500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:45:47.816068   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:45:47.816080   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:47.845226   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:45:47.845251   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:47.898646   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:45:47.898681   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:47.939440   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:45:47.939471   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:47.973436   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:45:47.973499   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:45:48.008222   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:45:48.008264   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:45:48.022115   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:45:48.022146   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:48.101167   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:45:48.101270   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:48.133470   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:45:48.133539   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:45:50.714735   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:45:50.728888   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:45:50.729016   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:45:50.759926   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:50.759949   51251 cri.go:89] found id: ""
	I1018 17:45:50.759958   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:45:50.760018   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:50.764094   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:45:50.764177   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:45:50.790739   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:50.790770   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:50.790776   51251 cri.go:89] found id: ""
	I1018 17:45:50.790784   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:45:50.790848   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:50.794745   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:50.798617   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:45:50.798692   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:45:50.827817   51251 cri.go:89] found id: ""
	I1018 17:45:50.827854   51251 logs.go:282] 0 containers: []
	W1018 17:45:50.827863   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:45:50.827870   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:45:50.827952   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:45:50.856700   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:50.856719   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:50.856723   51251 cri.go:89] found id: ""
	I1018 17:45:50.856731   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:45:50.856784   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:50.860815   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:50.864675   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:45:50.864745   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:45:50.889856   51251 cri.go:89] found id: ""
	I1018 17:45:50.889881   51251 logs.go:282] 0 containers: []
	W1018 17:45:50.889889   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:45:50.889896   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:45:50.889976   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:45:50.918684   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:50.918708   51251 cri.go:89] found id: ""
	I1018 17:45:50.918716   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:45:50.918800   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:50.924460   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:45:50.924531   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:45:50.951436   51251 cri.go:89] found id: ""
	I1018 17:45:50.951457   51251 logs.go:282] 0 containers: []
	W1018 17:45:50.951465   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:45:50.951475   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:45:50.951491   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:45:50.967914   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:45:50.967945   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:51.025758   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:45:51.025791   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:51.076423   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:45:51.076458   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:51.107878   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:45:51.107909   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:51.140881   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:45:51.140910   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:45:51.218816   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:45:51.218847   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:45:51.285410   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:45:51.278013    9675 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:51.278510    9675 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:51.279993    9675 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:51.280335    9675 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:51.281812    9675 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:45:51.278013    9675 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:51.278510    9675 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:51.279993    9675 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:51.280335    9675 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:51.281812    9675 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:45:51.285432   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:45:51.285444   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:51.314747   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:45:51.314775   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:51.388168   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:45:51.388242   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:45:51.424772   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:45:51.424801   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:45:54.026323   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:45:54.037679   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:45:54.037753   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:45:54.064502   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:54.064524   51251 cri.go:89] found id: ""
	I1018 17:45:54.064532   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:45:54.064585   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:54.068305   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:45:54.068376   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:45:54.097996   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:54.098018   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:54.098023   51251 cri.go:89] found id: ""
	I1018 17:45:54.098031   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:45:54.098085   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:54.102024   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:54.105866   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:45:54.105944   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:45:54.139891   51251 cri.go:89] found id: ""
	I1018 17:45:54.139915   51251 logs.go:282] 0 containers: []
	W1018 17:45:54.139924   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:45:54.139931   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:45:54.139986   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:45:54.166319   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:54.166343   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:54.166347   51251 cri.go:89] found id: ""
	I1018 17:45:54.166355   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:45:54.166420   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:54.170521   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:54.174527   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:45:54.174590   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:45:54.219178   51251 cri.go:89] found id: ""
	I1018 17:45:54.219212   51251 logs.go:282] 0 containers: []
	W1018 17:45:54.219220   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:45:54.219227   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:45:54.219283   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:45:54.246579   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:54.246602   51251 cri.go:89] found id: ""
	I1018 17:45:54.246610   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:45:54.246667   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:54.250546   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:45:54.250651   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:45:54.282408   51251 cri.go:89] found id: ""
	I1018 17:45:54.282432   51251 logs.go:282] 0 containers: []
	W1018 17:45:54.282440   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:45:54.282449   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:45:54.282460   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:45:54.367430   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:45:54.348041    9774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:54.348865    9774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:54.361407    9774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:54.362108    9774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:54.363737    9774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:45:54.348041    9774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:54.348865    9774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:54.361407    9774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:54.362108    9774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:54.363737    9774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:45:54.367454   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:45:54.367467   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:54.393831   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:45:54.393863   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:54.435123   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:45:54.435155   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:54.491144   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:45:54.491188   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:54.527193   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:45:54.527223   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:54.604327   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:45:54.604369   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:54.636282   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:45:54.636312   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:45:54.714664   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:45:54.714698   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:45:54.752480   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:45:54.752508   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:45:54.858349   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:45:54.858422   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:45:57.373300   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:45:57.384246   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:45:57.384335   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:45:57.415506   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:57.415571   51251 cri.go:89] found id: ""
	I1018 17:45:57.415595   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:45:57.415671   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:57.419389   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:45:57.419503   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:45:57.445186   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:57.445206   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:57.445211   51251 cri.go:89] found id: ""
	I1018 17:45:57.445219   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:45:57.445281   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:57.449004   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:57.452413   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:45:57.452492   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:45:57.477864   51251 cri.go:89] found id: ""
	I1018 17:45:57.477888   51251 logs.go:282] 0 containers: []
	W1018 17:45:57.477896   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:45:57.477903   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:45:57.477962   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:45:57.504898   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:57.504920   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:57.504931   51251 cri.go:89] found id: ""
	I1018 17:45:57.504977   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:45:57.505034   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:57.509061   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:57.513614   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:45:57.513685   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:45:57.544310   51251 cri.go:89] found id: ""
	I1018 17:45:57.544332   51251 logs.go:282] 0 containers: []
	W1018 17:45:57.544340   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:45:57.544346   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:45:57.544403   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:45:57.571245   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:57.571266   51251 cri.go:89] found id: ""
	I1018 17:45:57.571274   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:45:57.571331   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:57.575106   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:45:57.575176   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:45:57.606111   51251 cri.go:89] found id: ""
	I1018 17:45:57.606144   51251 logs.go:282] 0 containers: []
	W1018 17:45:57.606154   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:45:57.606162   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:45:57.606175   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:57.634184   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:45:57.634212   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:57.700157   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:45:57.700193   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:57.740730   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:45:57.740759   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:57.767473   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:45:57.767501   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:57.792761   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:45:57.792788   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:45:57.872610   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:45:57.872686   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:45:57.970465   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:45:57.970503   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:45:57.983943   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:45:57.983969   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:45:58.065431   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:45:58.056364    9964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:58.057407    9964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:58.058182    9964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:58.059825    9964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:58.060434    9964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:45:58.056364    9964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:58.057407    9964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:58.058182    9964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:58.059825    9964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:58.060434    9964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:45:58.065498   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:45:58.065512   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:58.140361   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:45:58.140407   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:46:00.709339   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:46:00.720914   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:46:00.721109   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:46:00.749016   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:00.749036   51251 cri.go:89] found id: ""
	I1018 17:46:00.749043   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:46:00.749098   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:00.752785   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:46:00.752913   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:46:00.780089   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:00.780157   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:00.780174   51251 cri.go:89] found id: ""
	I1018 17:46:00.780195   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:46:00.780277   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:00.784027   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:00.787918   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:46:00.787984   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:46:00.815886   51251 cri.go:89] found id: ""
	I1018 17:46:00.815911   51251 logs.go:282] 0 containers: []
	W1018 17:46:00.815920   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:46:00.815927   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:46:00.815984   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:46:00.843641   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:00.843672   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:00.843677   51251 cri.go:89] found id: ""
	I1018 17:46:00.843690   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:46:00.843749   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:00.857213   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:00.861599   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:46:00.861750   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:46:00.895883   51251 cri.go:89] found id: ""
	I1018 17:46:00.895957   51251 logs.go:282] 0 containers: []
	W1018 17:46:00.895981   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:46:00.896000   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:46:00.896070   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:46:00.925992   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:00.926061   51251 cri.go:89] found id: ""
	I1018 17:46:00.926086   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:46:00.926167   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:00.930024   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:46:00.930108   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:46:00.958457   51251 cri.go:89] found id: ""
	I1018 17:46:00.958482   51251 logs.go:282] 0 containers: []
	W1018 17:46:00.958490   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:46:00.958499   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:46:00.958511   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:01.035152   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:46:01.035187   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:01.069631   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:46:01.069662   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:01.099442   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:46:01.099466   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:46:01.185919   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:46:01.185957   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:46:01.233776   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:46:01.233801   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:46:01.247414   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:46:01.247442   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:01.275612   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:46:01.275640   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:01.332794   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:46:01.332829   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:01.367809   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:46:01.367840   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:46:01.464892   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:46:01.464929   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:46:01.535577   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:46:01.527773   10121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:01.528316   10121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:01.530190   10121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:01.530564   10121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:01.531863   10121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:46:01.527773   10121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:01.528316   10121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:01.530190   10121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:01.530564   10121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:01.531863   10121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:46:04.037058   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:46:04.047958   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:46:04.048043   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:46:04.080745   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:04.080770   51251 cri.go:89] found id: ""
	I1018 17:46:04.080779   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:46:04.080837   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:04.084749   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:46:04.084819   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:46:04.113194   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:04.113268   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:04.113275   51251 cri.go:89] found id: ""
	I1018 17:46:04.113283   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:46:04.113374   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:04.117058   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:04.121021   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:46:04.121088   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:46:04.150209   51251 cri.go:89] found id: ""
	I1018 17:46:04.150233   51251 logs.go:282] 0 containers: []
	W1018 17:46:04.150242   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:46:04.150248   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:46:04.150308   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:46:04.182648   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:04.182719   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:04.182732   51251 cri.go:89] found id: ""
	I1018 17:46:04.182740   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:46:04.182811   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:04.187068   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:04.191187   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:46:04.191265   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:46:04.226123   51251 cri.go:89] found id: ""
	I1018 17:46:04.226147   51251 logs.go:282] 0 containers: []
	W1018 17:46:04.226158   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:46:04.226165   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:46:04.226226   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:46:04.252111   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:04.252132   51251 cri.go:89] found id: ""
	I1018 17:46:04.252141   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:46:04.252196   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:04.255953   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:46:04.256026   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:46:04.287389   51251 cri.go:89] found id: ""
	I1018 17:46:04.287415   51251 logs.go:282] 0 containers: []
	W1018 17:46:04.287423   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:46:04.287432   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:46:04.287443   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:46:04.321947   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:46:04.321973   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:46:04.430342   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:46:04.430376   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:46:04.442744   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:46:04.442769   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:46:04.506948   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:46:04.498862   10206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:04.499448   10206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:04.501006   10206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:04.501596   10206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:04.503108   10206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:46:04.498862   10206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:04.499448   10206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:04.501006   10206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:04.501596   10206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:04.503108   10206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:46:04.507014   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:46:04.507043   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:04.543328   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:46:04.543361   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:04.572765   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:46:04.572798   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:04.602775   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:46:04.602801   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:04.658777   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:46:04.658812   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:04.732490   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:46:04.732537   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:04.759977   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:46:04.760005   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:46:07.339053   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:46:07.349656   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:46:07.349760   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:46:07.379978   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:07.380001   51251 cri.go:89] found id: ""
	I1018 17:46:07.380011   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:46:07.380093   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:07.383927   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:46:07.384018   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:46:07.409769   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:07.409800   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:07.409806   51251 cri.go:89] found id: ""
	I1018 17:46:07.409814   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:46:07.409902   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:07.413658   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:07.416960   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:46:07.417067   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:46:07.442892   51251 cri.go:89] found id: ""
	I1018 17:46:07.442916   51251 logs.go:282] 0 containers: []
	W1018 17:46:07.442924   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:46:07.442930   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:46:07.442989   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:46:07.469419   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:07.469440   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:07.469445   51251 cri.go:89] found id: ""
	I1018 17:46:07.469452   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:46:07.469508   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:07.473607   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:07.477386   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:46:07.477501   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:46:07.504080   51251 cri.go:89] found id: ""
	I1018 17:46:07.504105   51251 logs.go:282] 0 containers: []
	W1018 17:46:07.504116   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:46:07.504122   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:46:07.504231   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:46:07.531758   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:07.531781   51251 cri.go:89] found id: ""
	I1018 17:46:07.531790   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:46:07.531870   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:07.535733   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:46:07.535830   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:46:07.564437   51251 cri.go:89] found id: ""
	I1018 17:46:07.564463   51251 logs.go:282] 0 containers: []
	W1018 17:46:07.564471   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:46:07.564480   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:46:07.564524   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:07.628243   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:46:07.628278   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:46:07.662025   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:46:07.662052   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:46:07.764863   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:46:07.764897   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:46:07.776837   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:46:07.776865   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:46:07.847586   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:46:07.839604   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:07.840186   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:07.841835   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:07.842344   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:07.843875   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:46:07.839604   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:07.840186   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:07.841835   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:07.842344   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:07.843875   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:46:07.847606   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:46:07.847622   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:07.880085   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:46:07.880117   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:07.963636   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:46:07.963671   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:07.994194   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:46:07.994222   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:08.025564   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:46:08.025595   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:46:08.108415   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:46:08.108451   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:10.642798   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:46:10.653476   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:46:10.653548   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:46:10.679376   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:10.679398   51251 cri.go:89] found id: ""
	I1018 17:46:10.679407   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:46:10.679465   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:10.683355   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:46:10.683427   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:46:10.710429   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:10.710450   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:10.710454   51251 cri.go:89] found id: ""
	I1018 17:46:10.710461   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:46:10.710513   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:10.714130   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:10.717443   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:46:10.717506   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:46:10.744042   51251 cri.go:89] found id: ""
	I1018 17:46:10.744064   51251 logs.go:282] 0 containers: []
	W1018 17:46:10.744071   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:46:10.744078   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:46:10.744132   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:46:10.773166   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:10.773191   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:10.773196   51251 cri.go:89] found id: ""
	I1018 17:46:10.773203   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:46:10.773282   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:10.777442   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:10.781226   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:46:10.781299   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:46:10.808886   51251 cri.go:89] found id: ""
	I1018 17:46:10.808909   51251 logs.go:282] 0 containers: []
	W1018 17:46:10.808917   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:46:10.808924   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:46:10.809009   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:46:10.836634   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:10.836656   51251 cri.go:89] found id: ""
	I1018 17:46:10.836664   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:46:10.836720   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:10.840695   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:46:10.840772   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:46:10.869735   51251 cri.go:89] found id: ""
	I1018 17:46:10.869799   51251 logs.go:282] 0 containers: []
	W1018 17:46:10.869812   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:46:10.869822   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:46:10.869833   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:46:10.949626   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:46:10.949665   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:46:11.057346   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:46:11.057383   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:11.139105   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:46:11.139141   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:11.170764   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:46:11.170861   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:11.214148   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:46:11.214173   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:46:11.245381   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:46:11.245409   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:46:11.258609   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:46:11.258636   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:46:11.329040   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:46:11.320826   10509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:11.321453   10509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:11.322971   10509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:11.323467   10509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:11.325006   10509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:46:11.320826   10509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:11.321453   10509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:11.322971   10509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:11.323467   10509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:11.325006   10509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:46:11.329060   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:46:11.329072   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:11.354686   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:46:11.354710   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:11.393844   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:46:11.393872   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:13.965067   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:46:13.977065   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:46:13.977139   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:46:14.006565   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:14.006590   51251 cri.go:89] found id: ""
	I1018 17:46:14.006600   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:46:14.006694   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:14.011312   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:46:14.011387   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:46:14.040339   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:14.040367   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:14.040372   51251 cri.go:89] found id: ""
	I1018 17:46:14.040380   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:46:14.040437   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:14.044065   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:14.047760   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:46:14.047831   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:46:14.074918   51251 cri.go:89] found id: ""
	I1018 17:46:14.074943   51251 logs.go:282] 0 containers: []
	W1018 17:46:14.074952   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:46:14.074960   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:46:14.075023   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:46:14.107504   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:14.107526   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:14.107531   51251 cri.go:89] found id: ""
	I1018 17:46:14.107539   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:46:14.107591   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:14.111227   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:14.114719   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:46:14.114811   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:46:14.145967   51251 cri.go:89] found id: ""
	I1018 17:46:14.146042   51251 logs.go:282] 0 containers: []
	W1018 17:46:14.146062   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:46:14.146082   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:46:14.146164   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:46:14.186824   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:14.186888   51251 cri.go:89] found id: ""
	I1018 17:46:14.186910   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:46:14.186990   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:14.190545   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:46:14.190628   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:46:14.226876   51251 cri.go:89] found id: ""
	I1018 17:46:14.226971   51251 logs.go:282] 0 containers: []
	W1018 17:46:14.226994   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:46:14.227020   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:46:14.227045   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:46:14.329164   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:46:14.329201   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:46:14.397274   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:46:14.389270   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:14.390097   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:14.391638   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:14.392076   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:14.393694   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:46:14.389270   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:14.390097   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:14.391638   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:14.392076   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:14.393694   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:46:14.397296   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:46:14.397309   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:14.426769   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:46:14.426796   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:14.486615   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:46:14.486650   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:14.559349   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:46:14.559386   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:14.587426   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:46:14.587455   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:46:14.664068   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:46:14.664104   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:46:14.675861   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:46:14.675886   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:14.708879   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:46:14.708911   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:14.736861   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:46:14.736890   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:46:17.281896   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:46:17.292988   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:46:17.293081   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:46:17.321611   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:17.321634   51251 cri.go:89] found id: ""
	I1018 17:46:17.321642   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:46:17.321697   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:17.325317   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:46:17.325398   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:46:17.352512   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:17.352534   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:17.352538   51251 cri.go:89] found id: ""
	I1018 17:46:17.352546   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:46:17.352599   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:17.357098   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:17.360560   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:46:17.360677   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:46:17.390732   51251 cri.go:89] found id: ""
	I1018 17:46:17.390762   51251 logs.go:282] 0 containers: []
	W1018 17:46:17.390770   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:46:17.390778   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:46:17.390842   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:46:17.419824   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:17.419846   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:17.419851   51251 cri.go:89] found id: ""
	I1018 17:46:17.419858   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:46:17.419916   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:17.423710   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:17.427116   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:46:17.427185   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:46:17.453579   51251 cri.go:89] found id: ""
	I1018 17:46:17.453602   51251 logs.go:282] 0 containers: []
	W1018 17:46:17.453610   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:46:17.453617   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:46:17.453705   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:46:17.486285   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:17.486309   51251 cri.go:89] found id: ""
	I1018 17:46:17.486318   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:46:17.486372   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:17.490015   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:46:17.490104   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:46:17.518259   51251 cri.go:89] found id: ""
	I1018 17:46:17.518284   51251 logs.go:282] 0 containers: []
	W1018 17:46:17.518292   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:46:17.518301   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:46:17.518332   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:46:17.614000   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:46:17.614035   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:46:17.626518   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:46:17.626553   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:17.684157   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:46:17.684191   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:46:17.730343   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:46:17.730369   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:46:17.798308   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:46:17.789990   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:17.790724   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:17.792367   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:17.792674   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:17.794211   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:46:17.789990   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:17.790724   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:17.792367   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:17.792674   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:17.794211   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:46:17.798326   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:46:17.798338   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:17.823833   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:46:17.823857   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:17.865773   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:46:17.865799   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:17.935865   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:46:17.935900   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:17.978061   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:46:17.978088   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:18.006175   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:46:18.006205   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:46:20.594229   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:46:20.605152   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:46:20.605223   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:46:20.633212   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:20.633234   51251 cri.go:89] found id: ""
	I1018 17:46:20.633243   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:46:20.633310   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:20.637046   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:46:20.637118   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:46:20.663217   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:20.663238   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:20.663246   51251 cri.go:89] found id: ""
	I1018 17:46:20.663253   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:46:20.663325   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:20.667226   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:20.670621   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:46:20.670719   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:46:20.698213   51251 cri.go:89] found id: ""
	I1018 17:46:20.698235   51251 logs.go:282] 0 containers: []
	W1018 17:46:20.698244   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:46:20.698287   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:46:20.698367   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:46:20.730404   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:20.730434   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:20.730439   51251 cri.go:89] found id: ""
	I1018 17:46:20.730447   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:46:20.730519   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:20.734442   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:20.738131   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:46:20.738222   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:46:20.773079   51251 cri.go:89] found id: ""
	I1018 17:46:20.773149   51251 logs.go:282] 0 containers: []
	W1018 17:46:20.773171   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:46:20.773193   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:46:20.773277   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:46:20.800462   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:20.800534   51251 cri.go:89] found id: ""
	I1018 17:46:20.800569   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:46:20.800664   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:20.805115   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:46:20.805213   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:46:20.830418   51251 cri.go:89] found id: ""
	I1018 17:46:20.830442   51251 logs.go:282] 0 containers: []
	W1018 17:46:20.830451   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:46:20.830459   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:46:20.830470   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:20.912043   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:46:20.912075   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:20.938545   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:46:20.938572   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:20.977936   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:46:20.978010   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:46:21.013920   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:46:21.013950   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:46:21.119416   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:46:21.119450   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:46:21.132924   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:46:21.133048   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:46:21.220628   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:46:21.211038   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:21.212205   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:21.213238   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:21.213888   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:21.215798   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:46:21.211038   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:21.212205   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:21.213238   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:21.213888   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:21.215798   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:46:21.220657   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:46:21.220677   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:21.249593   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:46:21.249618   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:46:21.329125   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:46:21.329162   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:21.387066   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:46:21.387097   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:23.926900   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:46:23.937764   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:46:23.937832   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:46:23.976069   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:23.976129   51251 cri.go:89] found id: ""
	I1018 17:46:23.976159   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:46:23.976235   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:23.979863   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:46:23.979943   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:46:24.009930   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:24.009950   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:24.009954   51251 cri.go:89] found id: ""
	I1018 17:46:24.009963   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:46:24.010025   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:24.014274   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:24.018246   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:46:24.018317   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:46:24.046546   51251 cri.go:89] found id: ""
	I1018 17:46:24.046571   51251 logs.go:282] 0 containers: []
	W1018 17:46:24.046589   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:46:24.046596   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:46:24.046659   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:46:24.073391   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:24.073411   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:24.073416   51251 cri.go:89] found id: ""
	I1018 17:46:24.073428   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:46:24.073485   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:24.077447   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:24.081009   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:46:24.081083   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:46:24.108804   51251 cri.go:89] found id: ""
	I1018 17:46:24.108828   51251 logs.go:282] 0 containers: []
	W1018 17:46:24.108837   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:46:24.108843   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:46:24.108905   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:46:24.144321   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:24.144348   51251 cri.go:89] found id: ""
	I1018 17:46:24.144357   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:46:24.144413   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:24.148488   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:46:24.148592   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:46:24.176586   51251 cri.go:89] found id: ""
	I1018 17:46:24.176611   51251 logs.go:282] 0 containers: []
	W1018 17:46:24.176619   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:46:24.176629   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:46:24.176640   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:46:24.254257   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:46:24.245066   11017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:24.246406   11017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:24.248217   11017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:24.248923   11017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:24.250447   11017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:46:24.245066   11017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:24.246406   11017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:24.248217   11017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:24.248923   11017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:24.250447   11017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:46:24.254278   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:46:24.254290   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:24.281646   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:46:24.281673   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:24.354939   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:46:24.354974   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:24.383116   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:46:24.383140   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:46:24.462892   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:46:24.462927   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:46:24.504197   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:46:24.504228   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:24.562928   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:46:24.562961   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:24.599399   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:46:24.599433   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:24.631679   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:46:24.631746   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:46:24.732308   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:46:24.732344   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:46:27.244674   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:46:27.255895   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:46:27.256012   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:46:27.287040   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:27.287060   51251 cri.go:89] found id: ""
	I1018 17:46:27.287069   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:46:27.287149   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:27.290894   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:46:27.290963   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:46:27.320255   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:27.320275   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:27.320280   51251 cri.go:89] found id: ""
	I1018 17:46:27.320287   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:46:27.320342   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:27.323980   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:27.327547   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:46:27.327617   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:46:27.352735   51251 cri.go:89] found id: ""
	I1018 17:46:27.352759   51251 logs.go:282] 0 containers: []
	W1018 17:46:27.352768   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:46:27.352774   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:46:27.352857   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:46:27.379505   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:27.379527   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:27.379532   51251 cri.go:89] found id: ""
	I1018 17:46:27.379539   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:46:27.379595   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:27.383294   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:27.386911   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:46:27.386986   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:46:27.415912   51251 cri.go:89] found id: ""
	I1018 17:46:27.415934   51251 logs.go:282] 0 containers: []
	W1018 17:46:27.415943   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:46:27.415949   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:46:27.416005   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:46:27.445650   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:27.445672   51251 cri.go:89] found id: ""
	I1018 17:46:27.445682   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:46:27.445741   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:27.449604   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:46:27.449704   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:46:27.484794   51251 cri.go:89] found id: ""
	I1018 17:46:27.484859   51251 logs.go:282] 0 containers: []
	W1018 17:46:27.484882   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:46:27.484904   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:46:27.484958   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:46:27.584293   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:46:27.584332   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:27.648407   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:46:27.648440   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:27.676738   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:46:27.676766   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:46:27.689349   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:46:27.689383   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:46:27.762040   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:46:27.753582   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:27.754358   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:27.756209   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:27.756792   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:27.758400   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:46:27.753582   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:27.754358   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:27.756209   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:27.756792   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:27.758400   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:46:27.762060   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:46:27.762074   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:27.788162   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:46:27.788190   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:27.822151   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:46:27.822180   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:27.891958   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:46:27.891993   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:27.920389   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:46:27.920413   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:46:28.000828   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:46:28.000902   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:46:30.539090   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:46:30.549624   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:46:30.549693   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:46:30.576191   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:30.576210   51251 cri.go:89] found id: ""
	I1018 17:46:30.576218   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:46:30.576270   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:30.580032   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:46:30.580143   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:46:30.605554   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:30.605576   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:30.605582   51251 cri.go:89] found id: ""
	I1018 17:46:30.605600   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:46:30.605693   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:30.609432   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:30.613226   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:46:30.613297   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:46:30.640206   51251 cri.go:89] found id: ""
	I1018 17:46:30.640232   51251 logs.go:282] 0 containers: []
	W1018 17:46:30.640241   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:46:30.640248   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:46:30.640305   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:46:30.667995   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:30.668022   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:30.668027   51251 cri.go:89] found id: ""
	I1018 17:46:30.668035   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:46:30.668090   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:30.671800   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:30.675538   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:46:30.675607   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:46:30.700530   51251 cri.go:89] found id: ""
	I1018 17:46:30.700554   51251 logs.go:282] 0 containers: []
	W1018 17:46:30.700562   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:46:30.700568   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:46:30.700623   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:46:30.728589   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:30.728610   51251 cri.go:89] found id: ""
	I1018 17:46:30.728618   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:46:30.728673   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:30.732322   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:46:30.732414   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:46:30.757553   51251 cri.go:89] found id: ""
	I1018 17:46:30.757577   51251 logs.go:282] 0 containers: []
	W1018 17:46:30.757586   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:46:30.757594   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:46:30.757635   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:46:30.823888   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:46:30.816309   11289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:30.816862   11289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:30.818339   11289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:30.818806   11289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:30.820240   11289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:46:30.816309   11289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:30.816862   11289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:30.818339   11289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:30.818806   11289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:30.820240   11289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:46:30.823908   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:46:30.823921   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:30.849213   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:46:30.849239   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:30.906353   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:46:30.906387   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:30.995137   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:46:30.995173   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:46:31.081727   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:46:31.081761   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:46:31.125969   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:46:31.125994   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:46:31.232441   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:46:31.232474   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:46:31.244403   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:46:31.244430   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:31.288661   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:46:31.288704   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:31.322411   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:46:31.322439   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:33.853119   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:46:33.864167   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:46:33.864236   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:46:33.897397   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:33.897420   51251 cri.go:89] found id: ""
	I1018 17:46:33.897428   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:46:33.897485   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:33.901240   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:46:33.901310   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:46:33.929613   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:33.929646   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:33.929651   51251 cri.go:89] found id: ""
	I1018 17:46:33.929658   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:46:33.929735   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:33.933312   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:33.936856   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:46:33.936964   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:46:33.977530   51251 cri.go:89] found id: ""
	I1018 17:46:33.977558   51251 logs.go:282] 0 containers: []
	W1018 17:46:33.977566   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:46:33.977573   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:46:33.977631   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:46:34.012562   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:34.012584   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:34.012589   51251 cri.go:89] found id: ""
	I1018 17:46:34.012596   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:46:34.012656   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:34.016474   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:34.020781   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:46:34.020852   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:46:34.046987   51251 cri.go:89] found id: ""
	I1018 17:46:34.047014   51251 logs.go:282] 0 containers: []
	W1018 17:46:34.047022   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:46:34.047029   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:46:34.047086   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:46:34.076543   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:34.076564   51251 cri.go:89] found id: ""
	I1018 17:46:34.076575   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:46:34.076631   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:34.080378   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:46:34.080449   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:46:34.107694   51251 cri.go:89] found id: ""
	I1018 17:46:34.107716   51251 logs.go:282] 0 containers: []
	W1018 17:46:34.107724   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:46:34.107734   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:46:34.107745   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:46:34.119659   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:46:34.119686   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:34.177728   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:46:34.177831   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:34.238468   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:46:34.238509   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:34.321582   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:46:34.321620   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:34.353750   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:46:34.353776   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:34.384525   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:46:34.384552   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:46:34.462817   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:46:34.462849   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:46:34.494982   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:46:34.495010   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:46:34.598168   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:46:34.598203   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:46:34.675787   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:46:34.666968   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:34.667733   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:34.669584   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:34.670213   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:34.671781   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:46:34.666968   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:34.667733   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:34.669584   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:34.670213   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:34.671781   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
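	Each of the "failed describe nodes" blocks above is the same symptom: kubectl on the control-plane node cannot reach the apiserver on localhost:8443, so every API call ends in "connection refused". A minimal manual check from the node (a sketch only, assuming SSH access and the binary path already shown in the log; not something the test harness runs) might be:
	
	    # see whether anything is listening on the apiserver port yet
	    sudo ss -ltnp | grep 8443 || echo "apiserver not listening"
	    # probe the apiserver health endpoint directly, skipping TLS verification
	    curl -ksS https://localhost:8443/healthz || true
	    # once the port is up, the same describe call the harness retries should succeed
	    sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	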
	I1018 17:46:34.675809   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:46:34.675822   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:37.204073   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:46:37.217257   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:46:37.217324   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:46:37.242870   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:37.242892   51251 cri.go:89] found id: ""
	I1018 17:46:37.242900   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:46:37.242956   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:37.246583   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:46:37.246652   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:46:37.272095   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:37.272157   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:37.272174   51251 cri.go:89] found id: ""
	I1018 17:46:37.272195   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:46:37.272279   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:37.276536   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:37.280121   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:46:37.280190   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:46:37.305151   51251 cri.go:89] found id: ""
	I1018 17:46:37.305173   51251 logs.go:282] 0 containers: []
	W1018 17:46:37.305182   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:46:37.305188   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:46:37.305244   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:46:37.338068   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:37.338137   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:37.338155   51251 cri.go:89] found id: ""
	I1018 17:46:37.338191   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:46:37.338263   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:37.342725   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:37.346547   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:46:37.346621   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:46:37.374074   51251 cri.go:89] found id: ""
	I1018 17:46:37.374095   51251 logs.go:282] 0 containers: []
	W1018 17:46:37.374104   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:46:37.374110   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:46:37.374167   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:46:37.405324   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:37.405346   51251 cri.go:89] found id: ""
	I1018 17:46:37.405360   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:46:37.405434   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:37.409814   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:46:37.409899   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:46:37.435527   51251 cri.go:89] found id: ""
	I1018 17:46:37.435551   51251 logs.go:282] 0 containers: []
	W1018 17:46:37.435560   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:46:37.435568   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:46:37.435579   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:46:37.504448   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:46:37.496518   11559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:37.497134   11559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:37.498616   11559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:37.499058   11559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:37.500376   11559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:46:37.496518   11559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:37.497134   11559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:37.498616   11559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:37.499058   11559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:37.500376   11559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:46:37.504468   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:46:37.504482   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:37.533375   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:46:37.533403   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:37.598625   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:46:37.598661   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:37.634535   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:46:37.634563   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:46:37.717277   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:46:37.717311   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:46:37.818978   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:46:37.819016   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:46:37.832055   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:46:37.832084   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:37.904377   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:46:37.904408   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:37.938939   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:46:37.938966   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:37.981000   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:46:37.981027   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:46:40.513454   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:46:40.524358   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:46:40.524437   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:46:40.552377   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:40.552454   51251 cri.go:89] found id: ""
	I1018 17:46:40.552475   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:46:40.552563   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:40.556445   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:46:40.556565   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:46:40.582695   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:40.582726   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:40.582732   51251 cri.go:89] found id: ""
	I1018 17:46:40.582739   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:46:40.582814   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:40.586779   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:40.590379   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:46:40.590449   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:46:40.618010   51251 cri.go:89] found id: ""
	I1018 17:46:40.618034   51251 logs.go:282] 0 containers: []
	W1018 17:46:40.618050   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:46:40.618056   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:46:40.618113   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:46:40.648753   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:40.648776   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:40.648782   51251 cri.go:89] found id: ""
	I1018 17:46:40.648790   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:46:40.648848   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:40.652681   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:40.656399   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:46:40.656475   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:46:40.682133   51251 cri.go:89] found id: ""
	I1018 17:46:40.682157   51251 logs.go:282] 0 containers: []
	W1018 17:46:40.682165   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:46:40.682180   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:46:40.682236   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:46:40.709218   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:40.709242   51251 cri.go:89] found id: ""
	I1018 17:46:40.709250   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:46:40.709309   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:40.713679   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:46:40.713762   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:46:40.739858   51251 cri.go:89] found id: ""
	I1018 17:46:40.739881   51251 logs.go:282] 0 containers: []
	W1018 17:46:40.739889   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:46:40.739899   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:46:40.739910   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:40.767013   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:46:40.767039   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:46:40.815169   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:46:40.815198   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:46:40.828097   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:46:40.828174   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:40.854852   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:46:40.854880   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:40.928587   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:46:40.928623   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:40.967185   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:46:40.967264   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:41.043445   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:46:41.043480   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:41.073682   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:46:41.073706   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:46:41.167926   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:46:41.167960   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:46:41.279975   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:46:41.280011   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:46:41.354826   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:46:41.337935   11766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:41.339488   11766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:41.340251   11766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:41.347202   11766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:41.347805   11766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:46:41.337935   11766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:41.339488   11766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:41.340251   11766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:41.347202   11766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:41.347805   11766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:46:43.856192   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:46:43.867961   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:46:43.868072   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:46:43.894221   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:43.894243   51251 cri.go:89] found id: ""
	I1018 17:46:43.894252   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:46:43.894332   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:43.898170   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:46:43.898263   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:46:43.925956   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:43.926031   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:43.926050   51251 cri.go:89] found id: ""
	I1018 17:46:43.926070   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:46:43.926142   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:43.929746   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:43.933185   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:46:43.933255   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:46:43.959602   51251 cri.go:89] found id: ""
	I1018 17:46:43.959627   51251 logs.go:282] 0 containers: []
	W1018 17:46:43.959635   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:46:43.959647   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:46:43.959704   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:46:43.991256   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:43.991325   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:43.991354   51251 cri.go:89] found id: ""
	I1018 17:46:43.991375   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:46:43.991457   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:43.995372   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:43.999083   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:46:43.999191   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:46:44.027597   51251 cri.go:89] found id: ""
	I1018 17:46:44.027632   51251 logs.go:282] 0 containers: []
	W1018 17:46:44.027641   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:46:44.027647   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:46:44.027715   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:46:44.055061   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:44.055085   51251 cri.go:89] found id: ""
	I1018 17:46:44.055094   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:46:44.055163   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:44.059234   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:46:44.059339   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:46:44.087631   51251 cri.go:89] found id: ""
	I1018 17:46:44.087653   51251 logs.go:282] 0 containers: []
	W1018 17:46:44.087661   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:46:44.087670   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:46:44.087681   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:46:44.189442   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:46:44.189477   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:44.218935   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:46:44.218961   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:44.286708   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:46:44.286746   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:44.321434   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:46:44.321463   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:46:44.399455   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:46:44.399492   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:46:44.434475   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:46:44.434502   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:46:44.448230   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:46:44.448256   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:46:44.523028   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:46:44.515201   11882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:44.515969   11882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:44.517455   11882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:44.517964   11882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:44.519503   11882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:46:44.515201   11882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:44.515969   11882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:44.517455   11882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:44.517964   11882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:44.519503   11882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:46:44.523047   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:46:44.523060   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:44.559772   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:46:44.559799   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:44.632864   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:46:44.632968   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:47.163147   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:46:47.174684   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:46:47.174753   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:46:47.212548   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:47.212575   51251 cri.go:89] found id: ""
	I1018 17:46:47.212583   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:46:47.212638   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:47.216970   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:46:47.217043   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:46:47.246472   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:47.246547   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:47.246565   51251 cri.go:89] found id: ""
	I1018 17:46:47.246585   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:46:47.246669   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:47.252448   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:47.255988   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:46:47.256113   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:46:47.287109   51251 cri.go:89] found id: ""
	I1018 17:46:47.287134   51251 logs.go:282] 0 containers: []
	W1018 17:46:47.287144   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:46:47.287150   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:46:47.287211   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:46:47.316914   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:47.316964   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:47.316969   51251 cri.go:89] found id: ""
	I1018 17:46:47.316977   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:46:47.317032   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:47.320849   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:47.324385   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:46:47.324455   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:46:47.351869   51251 cri.go:89] found id: ""
	I1018 17:46:47.351894   51251 logs.go:282] 0 containers: []
	W1018 17:46:47.351902   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:46:47.351908   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:46:47.351963   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:46:47.378692   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:47.378712   51251 cri.go:89] found id: ""
	I1018 17:46:47.378720   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:46:47.378773   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:47.382267   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:46:47.382341   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:46:47.417848   51251 cri.go:89] found id: ""
	I1018 17:46:47.417914   51251 logs.go:282] 0 containers: []
	W1018 17:46:47.417928   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:46:47.417938   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:46:47.417953   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:46:47.515489   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:46:47.515527   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:46:47.598137   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:46:47.585088   11978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:47.586210   11978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:47.586811   11978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:47.592142   11978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:47.592951   11978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:46:47.585088   11978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:47.586210   11978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:47.586811   11978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:47.592142   11978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:47.592951   11978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:46:47.598159   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:46:47.598172   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:47.627147   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:46:47.627171   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:47.685715   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:46:47.685749   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:47.729509   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:46:47.729542   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:47.802620   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:46:47.802658   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:47.841366   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:46:47.841393   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:46:47.853500   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:46:47.853528   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:47.882085   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:46:47.882112   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:46:47.962102   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:46:47.962182   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
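	The cycle that keeps repeating above (list each control-plane component with crictl, then tail its logs, plus the crio and kubelet unit logs) can be reproduced by hand on the node. A rough equivalent of one pass, built only from commands already visible in this log, would be:
	
	    # iterate the same component names the harness queries
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	      for id in $(sudo crictl ps -a --quiet --name="$name"); do
	        echo "== $name $id =="
	        sudo crictl logs --tail 400 "$id"
	      done
	    done
	    # unit logs the harness also collects
	    sudo journalctl -u crio -n 400
	    sudo journalctl -u kubelet -n 400
	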
	I1018 17:46:50.497378   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:46:50.509438   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:46:50.509515   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:46:50.536827   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:50.536845   51251 cri.go:89] found id: ""
	I1018 17:46:50.536853   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:46:50.536906   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:50.540656   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:46:50.540736   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:46:50.572295   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:50.572315   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:50.572319   51251 cri.go:89] found id: ""
	I1018 17:46:50.572326   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:46:50.572381   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:50.576114   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:50.579678   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:46:50.579767   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:46:50.604801   51251 cri.go:89] found id: ""
	I1018 17:46:50.604883   51251 logs.go:282] 0 containers: []
	W1018 17:46:50.604907   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:46:50.604953   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:46:50.605039   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:46:50.630628   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:50.630689   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:50.630709   51251 cri.go:89] found id: ""
	I1018 17:46:50.630731   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:46:50.630799   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:50.634652   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:50.638142   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:46:50.638211   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:46:50.668081   51251 cri.go:89] found id: ""
	I1018 17:46:50.668158   51251 logs.go:282] 0 containers: []
	W1018 17:46:50.668178   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:46:50.668199   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:46:50.668286   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:46:50.695569   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:50.695633   51251 cri.go:89] found id: ""
	I1018 17:46:50.695655   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:46:50.695739   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:50.699470   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:46:50.699542   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:46:50.727412   51251 cri.go:89] found id: ""
	I1018 17:46:50.727436   51251 logs.go:282] 0 containers: []
	W1018 17:46:50.727445   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:46:50.727454   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:46:50.727467   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:50.753408   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:46:50.753435   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:50.827768   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:46:50.827848   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:50.859978   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:46:50.860003   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:46:50.939527   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:46:50.939561   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:46:50.980682   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:46:50.980711   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:46:51.076628   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:46:51.076663   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:46:51.090191   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:46:51.090220   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:46:51.182260   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:46:51.173917   12158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:51.174843   12158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:51.176369   12158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:51.176776   12158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:51.178414   12158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:46:51.173917   12158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:51.174843   12158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:51.176369   12158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:51.176776   12158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:51.178414   12158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:46:51.182283   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:46:51.182295   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:51.232720   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:46:51.232749   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:51.308144   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:46:51.308178   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:53.837977   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:46:53.848545   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:46:53.848614   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:46:53.876495   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:53.876519   51251 cri.go:89] found id: ""
	I1018 17:46:53.876528   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:46:53.876595   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:53.880322   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:46:53.880394   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:46:53.907168   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:53.907231   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:53.907249   51251 cri.go:89] found id: ""
	I1018 17:46:53.907272   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:46:53.907357   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:53.911597   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:53.914987   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:46:53.915059   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:46:53.940518   51251 cri.go:89] found id: ""
	I1018 17:46:53.940542   51251 logs.go:282] 0 containers: []
	W1018 17:46:53.940551   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:46:53.940557   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:46:53.940616   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:46:53.978433   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:53.978457   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:53.978462   51251 cri.go:89] found id: ""
	I1018 17:46:53.978469   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:46:53.978524   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:53.982381   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:53.985948   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:46:53.986022   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:46:54.015365   51251 cri.go:89] found id: ""
	I1018 17:46:54.015389   51251 logs.go:282] 0 containers: []
	W1018 17:46:54.015403   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:46:54.015410   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:46:54.015469   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:46:54.043566   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:54.043585   51251 cri.go:89] found id: ""
	I1018 17:46:54.043594   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:46:54.043652   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:54.047469   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:46:54.047537   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:46:54.074756   51251 cri.go:89] found id: ""
	I1018 17:46:54.074779   51251 logs.go:282] 0 containers: []
	W1018 17:46:54.074788   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:46:54.074797   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:46:54.074836   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:54.105299   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:46:54.105329   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:54.181466   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:46:54.181501   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:46:54.274419   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:46:54.274455   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:46:54.312879   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:46:54.312907   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:46:54.417669   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:46:54.417744   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:46:54.429755   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:46:54.429780   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:46:54.498834   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:46:54.489425   12285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:54.491045   12285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:54.492004   12285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:54.493115   12285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:54.494863   12285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:46:54.489425   12285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:54.491045   12285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:54.492004   12285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:54.493115   12285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:54.494863   12285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:46:54.498906   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:46:54.498927   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:54.527210   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:46:54.527238   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:54.569700   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:46:54.569732   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:54.644529   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:46:54.644561   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:57.172362   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:46:57.183486   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:46:57.183556   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:46:57.221818   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:57.221836   51251 cri.go:89] found id: ""
	I1018 17:46:57.221844   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:46:57.221899   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:57.225454   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:46:57.225520   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:46:57.252169   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:57.252192   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:57.252197   51251 cri.go:89] found id: ""
	I1018 17:46:57.252206   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:46:57.252263   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:57.256351   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:57.259722   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:46:57.259804   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:46:57.286504   51251 cri.go:89] found id: ""
	I1018 17:46:57.286527   51251 logs.go:282] 0 containers: []
	W1018 17:46:57.286536   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:46:57.286542   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:46:57.286603   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:46:57.314232   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:57.314254   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:57.314259   51251 cri.go:89] found id: ""
	I1018 17:46:57.314267   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:46:57.314322   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:57.317847   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:57.320999   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:46:57.321074   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:46:57.346974   51251 cri.go:89] found id: ""
	I1018 17:46:57.346999   51251 logs.go:282] 0 containers: []
	W1018 17:46:57.347008   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:46:57.347014   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:46:57.347069   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:46:57.373499   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:57.373567   51251 cri.go:89] found id: ""
	I1018 17:46:57.373587   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:46:57.373664   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:57.377584   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:46:57.377703   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:46:57.407749   51251 cri.go:89] found id: ""
	I1018 17:46:57.407773   51251 logs.go:282] 0 containers: []
	W1018 17:46:57.407782   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:46:57.407790   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:46:57.407801   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:46:57.420407   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:46:57.420432   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:57.450356   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:46:57.450384   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:57.487363   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:46:57.487394   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:57.580373   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:46:57.580410   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:46:57.617494   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:46:57.617524   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:46:57.719190   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:46:57.719227   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:46:57.790068   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:46:57.780054   12431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:57.780444   12431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:57.782856   12431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:57.783240   12431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:57.785433   12431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:46:57.780054   12431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:57.780444   12431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:57.782856   12431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:57.783240   12431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:57.785433   12431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:46:57.790090   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:46:57.790104   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:57.849803   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:46:57.849835   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:57.881569   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:46:57.881600   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:57.911940   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:46:57.911966   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:47:00.495334   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:47:00.507616   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:47:00.507694   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:47:00.539238   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:00.539258   51251 cri.go:89] found id: ""
	I1018 17:47:00.539266   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:47:00.539323   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:00.543503   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:47:00.543571   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:47:00.574079   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:00.574112   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:00.574118   51251 cri.go:89] found id: ""
	I1018 17:47:00.574126   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:47:00.574199   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:00.578461   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:00.582394   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:47:00.582473   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:47:00.609898   51251 cri.go:89] found id: ""
	I1018 17:47:00.609973   51251 logs.go:282] 0 containers: []
	W1018 17:47:00.610004   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:47:00.610017   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:47:00.610086   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:47:00.637367   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:00.637388   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:00.637393   51251 cri.go:89] found id: ""
	I1018 17:47:00.637400   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:47:00.637464   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:00.641319   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:00.644789   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:47:00.644895   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:47:00.672435   51251 cri.go:89] found id: ""
	I1018 17:47:00.672467   51251 logs.go:282] 0 containers: []
	W1018 17:47:00.672476   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:47:00.672498   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:47:00.672580   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:47:00.699455   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:00.699483   51251 cri.go:89] found id: ""
	I1018 17:47:00.699492   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:47:00.699583   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:00.703264   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:47:00.703360   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:47:00.728880   51251 cri.go:89] found id: ""
	I1018 17:47:00.728902   51251 logs.go:282] 0 containers: []
	W1018 17:47:00.728909   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:47:00.728919   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:47:00.728930   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:47:00.823491   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:47:00.823527   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:47:00.902015   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:47:00.902048   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:47:00.934461   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:47:00.934491   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:47:00.946667   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:47:00.946693   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:47:01.028399   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:47:01.020279   12546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:01.020921   12546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:01.022494   12546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:01.023037   12546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:01.024610   12546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:47:01.020279   12546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:01.020921   12546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:01.022494   12546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:01.023037   12546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:01.024610   12546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:47:01.028462   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:47:01.028491   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:01.054806   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:47:01.054833   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:01.113787   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:47:01.113863   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:01.158354   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:47:01.158386   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:01.240342   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:47:01.240377   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:01.271277   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:47:01.271308   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:03.801529   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:47:03.812492   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:47:03.812565   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:47:03.840023   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:03.840046   51251 cri.go:89] found id: ""
	I1018 17:47:03.840054   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:47:03.840107   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:03.844123   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:47:03.844199   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:47:03.871286   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:03.871312   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:03.871317   51251 cri.go:89] found id: ""
	I1018 17:47:03.871325   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:47:03.871393   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:03.875415   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:03.879340   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:47:03.879454   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:47:03.907561   51251 cri.go:89] found id: ""
	I1018 17:47:03.907586   51251 logs.go:282] 0 containers: []
	W1018 17:47:03.907595   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:47:03.907602   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:47:03.907685   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:47:03.933344   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:03.933418   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:03.933445   51251 cri.go:89] found id: ""
	I1018 17:47:03.933467   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:47:03.933532   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:03.937202   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:03.940624   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:47:03.940692   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:47:03.976333   51251 cri.go:89] found id: ""
	I1018 17:47:03.976360   51251 logs.go:282] 0 containers: []
	W1018 17:47:03.976369   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:47:03.976375   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:47:03.976431   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:47:04.003969   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:04.003993   51251 cri.go:89] found id: ""
	I1018 17:47:04.004002   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:47:04.004073   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:04.008851   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:47:04.008931   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:47:04.043815   51251 cri.go:89] found id: ""
	I1018 17:47:04.043837   51251 logs.go:282] 0 containers: []
	W1018 17:47:04.043845   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:47:04.043854   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:47:04.043866   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:04.103935   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:47:04.103972   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:04.197102   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:47:04.197140   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:04.232873   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:47:04.232903   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:47:04.308823   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:47:04.308859   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:47:04.340563   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:47:04.340591   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:47:04.411725   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:47:04.402979   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:04.403733   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:04.405382   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:04.405957   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:04.407619   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:47:04.402979   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:04.403733   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:04.405382   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:04.405957   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:04.407619   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:47:04.411746   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:47:04.411758   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:04.436986   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:47:04.437017   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:04.474563   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:47:04.474599   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:04.508182   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:47:04.508207   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:47:04.612203   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:47:04.612245   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:47:07.124391   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:47:07.136931   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:47:07.137030   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:47:07.162931   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:07.162951   51251 cri.go:89] found id: ""
	I1018 17:47:07.162960   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:47:07.163014   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:07.166802   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:47:07.166873   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:47:07.194647   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:07.194666   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:07.194671   51251 cri.go:89] found id: ""
	I1018 17:47:07.194679   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:47:07.194732   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:07.198306   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:07.202321   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:47:07.202393   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:47:07.236779   51251 cri.go:89] found id: ""
	I1018 17:47:07.236804   51251 logs.go:282] 0 containers: []
	W1018 17:47:07.236813   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:47:07.236819   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:47:07.236876   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:47:07.266781   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:07.266801   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:07.266806   51251 cri.go:89] found id: ""
	I1018 17:47:07.266813   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:47:07.266867   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:07.270559   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:07.275186   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:47:07.275286   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:47:07.304386   51251 cri.go:89] found id: ""
	I1018 17:47:07.304423   51251 logs.go:282] 0 containers: []
	W1018 17:47:07.304454   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:47:07.304462   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:47:07.304540   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:47:07.333196   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:07.333220   51251 cri.go:89] found id: ""
	I1018 17:47:07.333228   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:47:07.333322   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:07.338348   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:47:07.338462   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:47:07.366271   51251 cri.go:89] found id: ""
	I1018 17:47:07.366343   51251 logs.go:282] 0 containers: []
	W1018 17:47:07.366364   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:47:07.366379   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:47:07.366391   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:47:07.468507   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:47:07.468585   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:07.529687   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:47:07.529725   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:07.565649   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:47:07.565779   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:07.596211   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:47:07.596237   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:47:07.615230   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:47:07.615299   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:47:07.692829   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:47:07.685395   12829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:07.685775   12829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:07.687235   12829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:07.687549   12829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:07.689030   12829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:47:07.685395   12829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:07.685775   12829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:07.687235   12829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:07.687549   12829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:07.689030   12829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:47:07.692899   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:47:07.692930   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:07.718952   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:47:07.719025   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:07.795561   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:47:07.795598   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:07.824250   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:47:07.824280   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:47:07.906836   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:47:07.906868   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:47:10.439981   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:47:10.451479   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:47:10.451545   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:47:10.480101   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:10.480123   51251 cri.go:89] found id: ""
	I1018 17:47:10.480132   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:47:10.480190   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:10.483904   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:47:10.484019   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:47:10.514873   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:10.514897   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:10.514902   51251 cri.go:89] found id: ""
	I1018 17:47:10.514910   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:47:10.514966   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:10.518574   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:10.522267   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:47:10.522379   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:47:10.550236   51251 cri.go:89] found id: ""
	I1018 17:47:10.550300   51251 logs.go:282] 0 containers: []
	W1018 17:47:10.550324   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:47:10.550343   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:47:10.550419   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:47:10.576542   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:10.576564   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:10.576569   51251 cri.go:89] found id: ""
	I1018 17:47:10.576576   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:47:10.576631   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:10.580343   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:10.583810   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:47:10.583876   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:47:10.608923   51251 cri.go:89] found id: ""
	I1018 17:47:10.608997   51251 logs.go:282] 0 containers: []
	W1018 17:47:10.609009   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:47:10.609016   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:47:10.609083   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:47:10.640901   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:10.640997   51251 cri.go:89] found id: ""
	I1018 17:47:10.641019   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:47:10.641104   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:10.644777   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:47:10.644898   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:47:10.686801   51251 cri.go:89] found id: ""
	I1018 17:47:10.686867   51251 logs.go:282] 0 containers: []
	W1018 17:47:10.686888   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:47:10.686902   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:47:10.686913   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:47:10.790476   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:47:10.790513   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:10.866774   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:47:10.866808   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:10.896066   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:47:10.896092   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:47:10.977137   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:47:10.977170   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:47:11.028633   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:47:11.028664   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:47:11.040841   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:47:11.040870   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:47:11.108732   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:47:11.100472   12971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:11.101171   12971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:11.102909   12971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:11.103502   12971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:11.105204   12971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:47:11.100472   12971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:11.101171   12971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:11.102909   12971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:11.103502   12971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:11.105204   12971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:47:11.108754   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:47:11.108767   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:11.142956   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:47:11.142982   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:11.203085   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:47:11.203120   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:11.245548   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:47:11.245582   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:13.780727   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:47:13.792098   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:47:13.792166   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:47:13.819543   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:13.819564   51251 cri.go:89] found id: ""
	I1018 17:47:13.819571   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:47:13.819627   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:13.823882   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:47:13.823951   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:47:13.849465   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:13.849495   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:13.849501   51251 cri.go:89] found id: ""
	I1018 17:47:13.849508   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:47:13.849563   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:13.853400   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:13.856833   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:47:13.856907   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:47:13.886459   51251 cri.go:89] found id: ""
	I1018 17:47:13.886482   51251 logs.go:282] 0 containers: []
	W1018 17:47:13.886502   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:47:13.886509   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:47:13.886576   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:47:13.914771   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:13.914840   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:13.914859   51251 cri.go:89] found id: ""
	I1018 17:47:13.914884   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:47:13.914961   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:13.919618   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:13.923284   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:47:13.923358   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:47:13.970811   51251 cri.go:89] found id: ""
	I1018 17:47:13.970833   51251 logs.go:282] 0 containers: []
	W1018 17:47:13.970841   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:47:13.970848   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:47:13.970905   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:47:13.997307   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:13.997333   51251 cri.go:89] found id: ""
	I1018 17:47:13.997341   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:47:13.997406   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:14.001258   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:47:14.001421   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:47:14.031834   51251 cri.go:89] found id: ""
	I1018 17:47:14.031908   51251 logs.go:282] 0 containers: []
	W1018 17:47:14.031930   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:47:14.031952   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:47:14.031991   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:14.115427   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:47:14.115472   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:14.155640   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:47:14.155675   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:47:14.260678   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:47:14.260712   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:14.299224   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:47:14.299256   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:14.328160   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:47:14.328189   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:47:14.402362   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:47:14.402396   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:47:14.436253   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:47:14.436279   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:47:14.448030   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:47:14.448054   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:47:14.523971   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:47:14.516092   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:14.516475   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:14.517978   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:14.518298   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:14.519757   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:47:14.516092   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:14.516475   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:14.517978   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:14.518298   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:14.519757   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:47:14.523992   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:47:14.524003   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:14.553496   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:47:14.553520   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:17.135556   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:47:17.147008   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:47:17.147074   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:47:17.173389   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:17.173409   51251 cri.go:89] found id: ""
	I1018 17:47:17.173417   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:47:17.173471   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:17.177579   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:47:17.177651   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:47:17.203627   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:17.203645   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:17.203650   51251 cri.go:89] found id: ""
	I1018 17:47:17.203657   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:47:17.203710   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:17.207344   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:17.217855   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:47:17.217930   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:47:17.249063   51251 cri.go:89] found id: ""
	I1018 17:47:17.249089   51251 logs.go:282] 0 containers: []
	W1018 17:47:17.249098   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:47:17.249105   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:47:17.249168   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:47:17.277163   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:17.277181   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:17.277186   51251 cri.go:89] found id: ""
	I1018 17:47:17.277193   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:47:17.277248   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:17.282612   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:17.286495   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:47:17.286569   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:47:17.319307   51251 cri.go:89] found id: ""
	I1018 17:47:17.319375   51251 logs.go:282] 0 containers: []
	W1018 17:47:17.319398   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:47:17.319410   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:47:17.319486   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:47:17.346484   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:17.346554   51251 cri.go:89] found id: ""
	I1018 17:47:17.346580   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:47:17.346657   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:17.350475   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:47:17.350550   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:47:17.377839   51251 cri.go:89] found id: ""
	I1018 17:47:17.377902   51251 logs.go:282] 0 containers: []
	W1018 17:47:17.377922   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:47:17.377931   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:47:17.377943   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:17.404392   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:47:17.404417   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:17.465336   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:47:17.465374   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:17.544540   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:47:17.544575   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:17.578410   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:47:17.578440   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:17.622849   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:47:17.622874   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:17.651286   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:47:17.651315   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:47:17.729896   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:47:17.729933   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:47:17.762097   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:47:17.762131   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:47:17.860291   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:47:17.860324   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:47:17.873306   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:47:17.873333   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:47:17.956831   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:47:17.948399   13279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:17.948817   13279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:17.950652   13279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:17.951205   13279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:17.953012   13279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:47:17.948399   13279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:17.948817   13279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:17.950652   13279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:17.951205   13279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:17.953012   13279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:47:20.457766   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:47:20.468306   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:47:20.468375   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:47:20.502498   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:20.502519   51251 cri.go:89] found id: ""
	I1018 17:47:20.502527   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:47:20.502581   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:20.506455   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:47:20.506526   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:47:20.533813   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:20.533831   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:20.533836   51251 cri.go:89] found id: ""
	I1018 17:47:20.533844   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:47:20.533897   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:20.537754   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:20.541481   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:47:20.541549   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:47:20.567040   51251 cri.go:89] found id: ""
	I1018 17:47:20.567063   51251 logs.go:282] 0 containers: []
	W1018 17:47:20.567071   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:47:20.567078   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:47:20.567139   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:47:20.596640   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:20.596661   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:20.596666   51251 cri.go:89] found id: ""
	I1018 17:47:20.596674   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:47:20.596729   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:20.600667   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:20.604504   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:47:20.604571   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:47:20.636801   51251 cri.go:89] found id: ""
	I1018 17:47:20.636826   51251 logs.go:282] 0 containers: []
	W1018 17:47:20.636835   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:47:20.636841   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:47:20.636919   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:47:20.663088   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:20.663107   51251 cri.go:89] found id: ""
	I1018 17:47:20.663120   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:47:20.663175   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:20.666758   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:47:20.666830   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:47:20.693183   51251 cri.go:89] found id: ""
	I1018 17:47:20.693205   51251 logs.go:282] 0 containers: []
	W1018 17:47:20.693214   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:47:20.693223   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:47:20.693233   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:47:20.759707   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:47:20.751450   13345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:20.752024   13345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:20.753590   13345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:20.754259   13345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:20.755733   13345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:47:20.751450   13345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:20.752024   13345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:20.753590   13345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:20.754259   13345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:20.755733   13345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:47:20.759728   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:47:20.759743   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:20.820356   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:47:20.820393   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:20.855109   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:47:20.855142   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:20.933430   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:47:20.933470   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:20.961931   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:47:20.961959   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:47:21.002517   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:47:21.002558   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:47:21.019433   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:47:21.019511   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:21.047420   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:47:21.047495   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:21.079819   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:47:21.079893   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:47:21.155722   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:47:21.155759   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:47:23.766139   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:47:23.777085   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:47:23.777151   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:47:23.811684   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:23.811707   51251 cri.go:89] found id: ""
	I1018 17:47:23.811715   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:47:23.811770   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:23.817453   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:47:23.817525   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:47:23.844121   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:23.844141   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:23.844146   51251 cri.go:89] found id: ""
	I1018 17:47:23.844153   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:47:23.844213   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:23.847866   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:23.851438   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:47:23.851510   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:47:23.879002   51251 cri.go:89] found id: ""
	I1018 17:47:23.879067   51251 logs.go:282] 0 containers: []
	W1018 17:47:23.879082   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:47:23.879089   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:47:23.879148   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:47:23.905700   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:23.905722   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:23.905727   51251 cri.go:89] found id: ""
	I1018 17:47:23.905735   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:47:23.905838   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:23.909628   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:23.913950   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:47:23.914019   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:47:23.946272   51251 cri.go:89] found id: ""
	I1018 17:47:23.946347   51251 logs.go:282] 0 containers: []
	W1018 17:47:23.946362   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:47:23.946370   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:47:23.946428   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:47:23.982078   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:23.982100   51251 cri.go:89] found id: ""
	I1018 17:47:23.982109   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:47:23.982162   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:23.985823   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:47:23.985895   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:47:24.020838   51251 cri.go:89] found id: ""
	I1018 17:47:24.020863   51251 logs.go:282] 0 containers: []
	W1018 17:47:24.020872   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:47:24.020881   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:47:24.020895   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:24.049680   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:47:24.049704   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:24.114947   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:47:24.114984   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:24.157780   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:47:24.157811   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:24.187365   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:47:24.187391   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:47:24.272125   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:47:24.264460   13521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:24.265126   13521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:24.266121   13521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:24.266734   13521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:24.268444   13521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:47:24.264460   13521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:24.265126   13521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:24.266121   13521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:24.266734   13521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:24.268444   13521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:47:24.272150   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:47:24.272162   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:24.351210   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:47:24.351246   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:24.379627   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:47:24.379654   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:47:24.459957   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:47:24.459991   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:47:24.490809   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:47:24.490834   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:47:24.594421   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:47:24.594457   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:47:27.106652   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:47:27.118797   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:47:27.118867   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:47:27.156694   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:27.156714   51251 cri.go:89] found id: ""
	I1018 17:47:27.156723   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:47:27.156776   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:27.160480   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:47:27.160550   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:47:27.187759   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:27.187780   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:27.187785   51251 cri.go:89] found id: ""
	I1018 17:47:27.187793   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:47:27.187855   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:27.191713   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:27.195093   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:47:27.195159   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:47:27.231641   51251 cri.go:89] found id: ""
	I1018 17:47:27.231663   51251 logs.go:282] 0 containers: []
	W1018 17:47:27.231671   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:47:27.231681   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:47:27.231737   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:47:27.259596   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:27.259614   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:27.259619   51251 cri.go:89] found id: ""
	I1018 17:47:27.259626   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:47:27.259678   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:27.263281   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:27.266728   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:47:27.266826   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:47:27.294104   51251 cri.go:89] found id: ""
	I1018 17:47:27.294127   51251 logs.go:282] 0 containers: []
	W1018 17:47:27.294139   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:47:27.294145   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:47:27.294205   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:47:27.321776   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:27.321798   51251 cri.go:89] found id: ""
	I1018 17:47:27.321806   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:47:27.321868   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:27.325558   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:47:27.325631   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:47:27.356639   51251 cri.go:89] found id: ""
	I1018 17:47:27.356666   51251 logs.go:282] 0 containers: []
	W1018 17:47:27.356674   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:47:27.356683   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:47:27.356694   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:47:27.462575   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:47:27.462610   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:47:27.529536   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:47:27.520733   13624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:27.521424   13624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:27.523093   13624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:27.523552   13624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:27.525157   13624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:47:27.520733   13624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:27.521424   13624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:27.523093   13624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:27.523552   13624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:27.525157   13624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:47:27.529559   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:47:27.529573   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:27.555154   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:47:27.555180   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:27.632084   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:47:27.632117   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:27.662590   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:47:27.662614   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:27.691692   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:47:27.691718   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:47:27.774358   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:47:27.774393   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:47:27.825515   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:47:27.825545   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:47:27.838343   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:47:27.838369   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:27.902992   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:47:27.903025   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:30.448737   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:47:30.460318   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:47:30.460398   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:47:30.488282   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:30.488306   51251 cri.go:89] found id: ""
	I1018 17:47:30.488314   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:47:30.488367   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:30.491908   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:47:30.491974   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:47:30.521041   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:30.521066   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:30.521071   51251 cri.go:89] found id: ""
	I1018 17:47:30.521079   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:47:30.521136   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:30.525103   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:30.528840   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:47:30.528916   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:47:30.562515   51251 cri.go:89] found id: ""
	I1018 17:47:30.562537   51251 logs.go:282] 0 containers: []
	W1018 17:47:30.562545   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:47:30.562551   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:47:30.562627   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:47:30.592562   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:30.592584   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:30.592589   51251 cri.go:89] found id: ""
	I1018 17:47:30.592596   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:47:30.592653   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:30.596706   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:30.600570   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:47:30.600692   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:47:30.627771   51251 cri.go:89] found id: ""
	I1018 17:47:30.627793   51251 logs.go:282] 0 containers: []
	W1018 17:47:30.627802   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:47:30.627808   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:47:30.627867   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:47:30.654477   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:30.654497   51251 cri.go:89] found id: ""
	I1018 17:47:30.654510   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:47:30.654565   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:30.658617   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:47:30.658686   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:47:30.689627   51251 cri.go:89] found id: ""
	I1018 17:47:30.689650   51251 logs.go:282] 0 containers: []
	W1018 17:47:30.689658   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:47:30.689667   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:47:30.689684   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:47:30.721050   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:47:30.721077   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:47:30.732370   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:47:30.732446   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:47:30.805446   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:47:30.796158   13776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:30.796640   13776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:30.798623   13776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:30.799026   13776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:30.800608   13776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:47:30.796158   13776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:30.796640   13776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:30.798623   13776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:30.799026   13776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:30.800608   13776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:47:30.805466   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:47:30.805478   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:30.830998   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:47:30.831024   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:30.906775   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:47:30.906811   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:30.940644   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:47:30.940671   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:47:31.026053   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:47:31.026089   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:47:31.137923   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:47:31.137966   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:31.233631   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:47:31.233668   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:31.264350   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:47:31.264374   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:33.793612   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:47:33.805648   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:47:33.805780   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:47:33.839954   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:33.840025   51251 cri.go:89] found id: ""
	I1018 17:47:33.840058   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:47:33.840138   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:33.844129   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:47:33.844243   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:47:33.871384   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:33.871408   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:33.871413   51251 cri.go:89] found id: ""
	I1018 17:47:33.871421   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:47:33.871476   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:33.875651   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:33.879420   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:47:33.879516   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:47:33.905649   51251 cri.go:89] found id: ""
	I1018 17:47:33.905676   51251 logs.go:282] 0 containers: []
	W1018 17:47:33.905684   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:47:33.905691   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:47:33.905749   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:47:33.934660   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:33.934683   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:33.934688   51251 cri.go:89] found id: ""
	I1018 17:47:33.934696   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:47:33.934780   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:33.938842   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:33.942670   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:47:33.942738   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:47:33.978544   51251 cri.go:89] found id: ""
	I1018 17:47:33.978568   51251 logs.go:282] 0 containers: []
	W1018 17:47:33.978576   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:47:33.978582   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:47:33.978643   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:47:34.012312   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:34.012389   51251 cri.go:89] found id: ""
	I1018 17:47:34.012468   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:47:34.012564   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:34.016868   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:47:34.017048   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:47:34.044577   51251 cri.go:89] found id: ""
	I1018 17:47:34.044648   51251 logs.go:282] 0 containers: []
	W1018 17:47:34.044668   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:47:34.044692   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:47:34.044729   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:34.072731   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:47:34.072799   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:47:34.103949   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:47:34.103978   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:47:34.117148   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:47:34.117176   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:47:34.197560   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:47:34.184268   13921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:34.184883   13921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:34.186363   13921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:34.186832   13921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:34.188578   13921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:47:34.184268   13921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:34.184883   13921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:34.186363   13921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:34.186832   13921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:34.188578   13921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:47:34.197584   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:47:34.197598   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:34.271679   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:47:34.271712   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:34.306656   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:47:34.306683   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:34.386272   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:47:34.386308   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:34.414077   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:47:34.414108   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:34.443807   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:47:34.443833   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:47:34.522683   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:47:34.522719   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:47:37.133400   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:47:37.147181   51251 out.go:203] 
	W1018 17:47:37.150020   51251 out.go:285] X Exiting due to K8S_APISERVER_MISSING: adding node: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1018 17:47:37.150063   51251 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1018 17:47:37.150073   51251 out.go:285] * Related issues:
	W1018 17:47:37.150088   51251 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1018 17:47:37.150102   51251 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1018 17:47:37.152991   51251 out.go:203] 
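The run above exits with K8S_APISERVER_MISSING after the 6m0s wait for an apiserver process. A minimal manual triage sketch, assuming the ha-181800 control-plane node is still reachable over minikube ssh; the commands simply mirror the checks the log gatherer runs above (pgrep, crictl ps, journalctl):

    # profile name ha-181800 is taken from the failing run above
    out/minikube-linux-arm64 ssh -p ha-181800 -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    out/minikube-linux-arm64 ssh -p ha-181800 -- sudo crictl ps -a --name kube-apiserver
    out/minikube-linux-arm64 ssh -p ha-181800 -- sudo journalctl -u kubelet -n 400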
	
	
	==> CRI-O <==
	Oct 18 17:42:09 ha-181800 crio[664]: time="2025-10-18T17:42:09.20257717Z" level=info msg="Started container" PID=1382 containerID=20677c7e60d1996e5ef30701c2fa483c048319a013425dfed6187c287c0356bf description=kube-system/kindnet-72mvm/kindnet-cni id=83e6058c-c5b8-448d-b3d7-5186691986a4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9a75bfa4304b7995fa070b07859898cd617fcbbbf769fcdbda120cb3da5f1690
	Oct 18 17:42:09 ha-181800 crio[664]: time="2025-10-18T17:42:09.208099281Z" level=info msg="Started container" PID=1383 containerID=53b6059c5f00ad29bd734722047caa1917ada2ed5ac7284628e49ffa30dab92f description=kube-system/coredns-66bc5c9577-p7nbg/coredns id=943df95e-dbb8-484a-8f2a-243495bd2d36 name=/runtime.v1.RuntimeService/StartContainer sandboxID=399a3f557e994a4d64c7e77bfa57fcb97dec3f4f1b2ef3d5dcc06e92031fff33
	Oct 18 17:42:10 ha-181800 crio[664]: time="2025-10-18T17:42:10.111678023Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=1ee0e455-5885-424a-be70-f38c74ac9b88 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 17:42:10 ha-181800 crio[664]: time="2025-10-18T17:42:10.113151329Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=7332cd08-d810-418f-9239-f994866438d4 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 17:42:10 ha-181800 crio[664]: time="2025-10-18T17:42:10.115024796Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=d765eb2e-c860-4fae-a3f2-643ee4144808 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 17:42:10 ha-181800 crio[664]: time="2025-10-18T17:42:10.11532002Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 17:42:10 ha-181800 crio[664]: time="2025-10-18T17:42:10.119986301Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 17:42:10 ha-181800 crio[664]: time="2025-10-18T17:42:10.120167292Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/794fca1f203edd67ad13c746b10dd2dcd8837f7ca0cf411e1437cb8975c5cb1d/merged/etc/passwd: no such file or directory"
	Oct 18 17:42:10 ha-181800 crio[664]: time="2025-10-18T17:42:10.120189134Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/794fca1f203edd67ad13c746b10dd2dcd8837f7ca0cf411e1437cb8975c5cb1d/merged/etc/group: no such file or directory"
	Oct 18 17:42:10 ha-181800 crio[664]: time="2025-10-18T17:42:10.120431935Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 17:42:10 ha-181800 crio[664]: time="2025-10-18T17:42:10.145840056Z" level=info msg="Created container a443aed43e21dadb519c5e91013a1d8eb554ae8abd04f5107863e313e372bdc7: kube-system/storage-provisioner/storage-provisioner" id=d765eb2e-c860-4fae-a3f2-643ee4144808 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 17:42:10 ha-181800 crio[664]: time="2025-10-18T17:42:10.146767329Z" level=info msg="Starting container: a443aed43e21dadb519c5e91013a1d8eb554ae8abd04f5107863e313e372bdc7" id=7f29d364-0d5e-4652-9da1-74e15b27ef77 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 17:42:10 ha-181800 crio[664]: time="2025-10-18T17:42:10.148484142Z" level=info msg="Started container" PID=1447 containerID=a443aed43e21dadb519c5e91013a1d8eb554ae8abd04f5107863e313e372bdc7 description=kube-system/storage-provisioner/storage-provisioner id=7f29d364-0d5e-4652-9da1-74e15b27ef77 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c018680cc61b2fa252ffde6cc7588c2be7ef28b3a444122d3feed4e3f9e480f5
	Oct 18 17:42:19 ha-181800 crio[664]: time="2025-10-18T17:42:19.512333091Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 17:42:19 ha-181800 crio[664]: time="2025-10-18T17:42:19.516220368Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 17:42:19 ha-181800 crio[664]: time="2025-10-18T17:42:19.516254731Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 17:42:19 ha-181800 crio[664]: time="2025-10-18T17:42:19.516276286Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 17:42:19 ha-181800 crio[664]: time="2025-10-18T17:42:19.51949706Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 17:42:19 ha-181800 crio[664]: time="2025-10-18T17:42:19.51953286Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 17:42:19 ha-181800 crio[664]: time="2025-10-18T17:42:19.519558739Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 17:42:19 ha-181800 crio[664]: time="2025-10-18T17:42:19.523529282Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 17:42:19 ha-181800 crio[664]: time="2025-10-18T17:42:19.52356175Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 17:42:19 ha-181800 crio[664]: time="2025-10-18T17:42:19.523584117Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 17:42:19 ha-181800 crio[664]: time="2025-10-18T17:42:19.526772128Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 17:42:19 ha-181800 crio[664]: time="2025-10-18T17:42:19.526803677Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	a443aed43e21d       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   5 minutes ago       Running             storage-provisioner       1                   c018680cc61b2       storage-provisioner                 kube-system
	53b6059c5f00a       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   5 minutes ago       Running             coredns                   1                   399a3f557e994       coredns-66bc5c9577-p7nbg            kube-system
	20677c7e60d19       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   5 minutes ago       Running             kindnet-cni               1                   9a75bfa4304b7       kindnet-72mvm                       kube-system
	f24a57e28db5a       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   5 minutes ago       Running             busybox                   1                   5e71cad12b779       busybox-7b57f96db7-fbwpv            default
	2e4a1f13e1162       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   5 minutes ago       Running             kube-proxy                1                   7fecbfb4c17d9       kube-proxy-stgvm                    kube-system
	2c69476db7a72       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   5 minutes ago       Running             coredns                   1                   578310fdfac47       coredns-66bc5c9577-f6v2w            kube-system
	96f0fa2b71bea       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   5 minutes ago       Running             kube-controller-manager   4                   6555f89f5d7b8       kube-controller-manager-ha-181800   kube-system
	3c32a11f94c33       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   6 minutes ago       Running             kube-apiserver            4                   e20726c2a8ebb       kube-apiserver-ha-181800            kube-system
	1ffdfbb5e9622       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   6 minutes ago       Exited              kube-controller-manager   3                   6555f89f5d7b8       kube-controller-manager-ha-181800   kube-system
	933870b5e9434       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   6 minutes ago       Exited              kube-apiserver            3                   e20726c2a8ebb       kube-apiserver-ha-181800            kube-system
	dda012a63c45a       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   7 minutes ago       Running             etcd                      1                   41b759ba439df       etcd-ha-181800                      kube-system
	ac8ef32697a35       2a8917f902489be5a8dd414209c32b77bd644d187ea646d86dbdc31e85efb551   7 minutes ago       Running             kube-vip                  0                   a52c5b125e763       kube-vip-ha-181800                  kube-system
	6e9b6c2f0e69c       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   7 minutes ago       Running             kube-scheduler            1                   44df15c75598f       kube-scheduler-ha-181800            kube-system
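In the container status table above, kube-apiserver attempt 3 (933870b5e9434) has exited while attempt 4 (3c32a11f94c33) is running. A short sketch, assuming the node-side crictl used by the gatherer resolves the truncated container ID, for pulling the exited attempt's logs directly:

    # 933870b5e9434 is the exited kube-apiserver container from the table above;
    # crictl is assumed to accept the truncated ID as a unique prefix
    out/minikube-linux-arm64 ssh -p ha-181800 -- sudo /usr/local/bin/crictl logs --tail 400 933870b5e9434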
	
	
	==> coredns [2c69476db7a72cef87d583347c986806259d1f8ec4d34537de08f030eed150f5] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54621 - 11724 "HINFO IN 6166212655013536567.4042456242834438062. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.026635361s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [53b6059c5f00ad29bd734722047caa1917ada2ed5ac7284628e49ffa30dab92f] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36574 - 3492 "HINFO IN 4503061436688671475.4348845373689282768. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.02623671s
	
	
	==> describe nodes <==
	Name:               ha-181800
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-181800
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=ha-181800
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T17_33_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 17:33:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-181800
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 17:47:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 17:46:57 +0000   Sat, 18 Oct 2025 17:33:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 17:46:57 +0000   Sat, 18 Oct 2025 17:33:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 17:46:57 +0000   Sat, 18 Oct 2025 17:33:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 17:46:57 +0000   Sat, 18 Oct 2025 17:34:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-181800
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                7dc9b150-98ed-4d4d-b680-5759a1e067a9
	  Boot ID:                    8a1d6305-2994-4aa4-a4cc-6b62966b9918
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-fbwpv             0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-66bc5c9577-f6v2w             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     14m
	  kube-system                 coredns-66bc5c9577-p7nbg             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     14m
	  kube-system                 etcd-ha-181800                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         14m
	  kube-system                 kindnet-72mvm                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      14m
	  kube-system                 kube-apiserver-ha-181800             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-181800    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-stgvm                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-181800             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-181800                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m47s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m34s                  kube-proxy       
	  Normal   Starting                 14m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node ha-181800 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     14m (x8 over 14m)      kubelet          Node ha-181800 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node ha-181800 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  14m                    kubelet          Node ha-181800 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     14m                    kubelet          Node ha-181800 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    14m                    kubelet          Node ha-181800 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 14m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 14m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   RegisteredNode           14m                    node-controller  Node ha-181800 event: Registered Node ha-181800 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-181800 event: Registered Node ha-181800 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-181800 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-181800 event: Registered Node ha-181800 in Controller
	  Normal   RegisteredNode           8m22s                  node-controller  Node ha-181800 event: Registered Node ha-181800 in Controller
	  Normal   Starting                 7m48s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 7m48s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  7m48s (x8 over 7m48s)  kubelet          Node ha-181800 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m48s (x8 over 7m48s)  kubelet          Node ha-181800 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m48s (x8 over 7m48s)  kubelet          Node ha-181800 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m54s                  node-controller  Node ha-181800 event: Registered Node ha-181800 in Controller
	
	
	Name:               ha-181800-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-181800-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=ha-181800
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_18T17_34_02_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 17:34:02 +0000
	Taints:             node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-181800-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 17:39:26 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sat, 18 Oct 2025 17:39:16 +0000   Sat, 18 Oct 2025 17:42:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sat, 18 Oct 2025 17:39:16 +0000   Sat, 18 Oct 2025 17:42:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sat, 18 Oct 2025 17:39:16 +0000   Sat, 18 Oct 2025 17:42:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sat, 18 Oct 2025 17:39:16 +0000   Sat, 18 Oct 2025 17:42:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-181800-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                b2dd8f24-78e0-4eba-8b0c-b12412f7af7d
	  Boot ID:                    8a1d6305-2994-4aa4-a4cc-6b62966b9918
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-cp9q6                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-181800-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         13m
	  kube-system                 kindnet-86s8z                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-181800-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-181800-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-dpwpn                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-181800-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-181800-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 13m                kube-proxy       
	  Normal   RegisteredNode           13m                node-controller  Node ha-181800-m02 event: Registered Node ha-181800-m02 in Controller
	  Normal   RegisteredNode           13m                node-controller  Node ha-181800-m02 event: Registered Node ha-181800-m02 in Controller
	  Normal   RegisteredNode           12m                node-controller  Node ha-181800-m02 event: Registered Node ha-181800-m02 in Controller
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node ha-181800-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node ha-181800-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  10m (x9 over 10m)  kubelet          Node ha-181800-m02 status is now: NodeHasSufficientMemory
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Normal   NodeNotReady             9m45s              node-controller  Node ha-181800-m02 status is now: NodeNotReady
	  Warning  ContainerGCFailed        9m11s              kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           8m22s              node-controller  Node ha-181800-m02 event: Registered Node ha-181800-m02 in Controller
	  Normal   RegisteredNode           5m54s              node-controller  Node ha-181800-m02 event: Registered Node ha-181800-m02 in Controller
	  Normal   NodeNotReady             5m4s               node-controller  Node ha-181800-m02 status is now: NodeNotReady
	
	
	Name:               ha-181800-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-181800-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=ha-181800
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_18T17_35_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 17:35:18 +0000
	Taints:             node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-181800-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 17:39:12 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sat, 18 Oct 2025 17:38:02 +0000   Sat, 18 Oct 2025 17:42:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sat, 18 Oct 2025 17:38:02 +0000   Sat, 18 Oct 2025 17:42:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sat, 18 Oct 2025 17:38:02 +0000   Sat, 18 Oct 2025 17:42:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sat, 18 Oct 2025 17:38:02 +0000   Sat, 18 Oct 2025 17:42:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-181800-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                4a1abf8a-63a3-4737-81ec-1878616c489b
	  Boot ID:                    8a1d6305-2994-4aa4-a4cc-6b62966b9918
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-lzcbm                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-181800-m03                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-9qbbw                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-ha-181800-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-181800-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-qsqmb                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-181800-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-181800-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        12m    kube-proxy       
	  Normal  RegisteredNode  12m    node-controller  Node ha-181800-m03 event: Registered Node ha-181800-m03 in Controller
	  Normal  RegisteredNode  12m    node-controller  Node ha-181800-m03 event: Registered Node ha-181800-m03 in Controller
	  Normal  RegisteredNode  12m    node-controller  Node ha-181800-m03 event: Registered Node ha-181800-m03 in Controller
	  Normal  RegisteredNode  8m22s  node-controller  Node ha-181800-m03 event: Registered Node ha-181800-m03 in Controller
	  Normal  RegisteredNode  5m54s  node-controller  Node ha-181800-m03 event: Registered Node ha-181800-m03 in Controller
	  Normal  NodeNotReady    5m4s   node-controller  Node ha-181800-m03 status is now: NodeNotReady
	
	
	Name:               ha-181800-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-181800-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=ha-181800
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_18T17_36_11_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 17:36:10 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-181800-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 17:39:13 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sat, 18 Oct 2025 17:38:23 +0000   Sat, 18 Oct 2025 17:42:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sat, 18 Oct 2025 17:38:23 +0000   Sat, 18 Oct 2025 17:42:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sat, 18 Oct 2025 17:38:23 +0000   Sat, 18 Oct 2025 17:42:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sat, 18 Oct 2025 17:38:23 +0000   Sat, 18 Oct 2025 17:42:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-181800-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                afc79373-b3a1-4495-8f28-5c3685ad131e
	  Boot ID:                    8a1d6305-2994-4aa4-a4cc-6b62966b9918
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-88bv7       100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-proxy-fj4ww    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   NodeHasNoDiskPressure    11m (x3 over 11m)  kubelet          Node ha-181800-m04 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 11m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 11m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  11m (x3 over 11m)  kubelet          Node ha-181800-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     11m (x3 over 11m)  kubelet          Node ha-181800-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m                node-controller  Node ha-181800-m04 event: Registered Node ha-181800-m04 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-181800-m04 event: Registered Node ha-181800-m04 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-181800-m04 event: Registered Node ha-181800-m04 in Controller
	  Normal   NodeReady                10m                kubelet          Node ha-181800-m04 status is now: NodeReady
	  Normal   RegisteredNode           8m22s              node-controller  Node ha-181800-m04 event: Registered Node ha-181800-m04 in Controller
	  Normal   RegisteredNode           5m54s              node-controller  Node ha-181800-m04 event: Registered Node ha-181800-m04 in Controller
	  Normal   NodeNotReady             5m4s               node-controller  Node ha-181800-m04 status is now: NodeNotReady
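All three secondary nodes above report "Kubelet stopped posting node status", and ha-181800-m02 also logged ContainerGCFailed against a missing /var/run/crio/crio.sock. A hedged follow-up sketch, assuming minikube ssh's --node flag can target the secondary node and the same crio/kubelet systemd units queried by the gatherer:

    # --node targeting of ha-181800-m02 is an assumption; the crio and kubelet units
    # are the same ones journalctl is run against elsewhere in this log
    out/minikube-linux-arm64 ssh -p ha-181800 --node ha-181800-m02 -- sudo systemctl status crio kubelet
    out/minikube-linux-arm64 ssh -p ha-181800 --node ha-181800-m02 -- sudo journalctl -u crio -n 400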
	
	
	==> dmesg <==
	[Oct18 16:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014995] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.499206] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.035776] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.808632] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.418900] kauditd_printk_skb: 36 callbacks suppressed
	[Oct18 17:12] overlayfs: idmapped layers are currently not supported
	[  +0.082393] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Oct18 17:18] overlayfs: idmapped layers are currently not supported
	[Oct18 17:19] overlayfs: idmapped layers are currently not supported
	[Oct18 17:33] overlayfs: idmapped layers are currently not supported
	[ +35.716082] overlayfs: idmapped layers are currently not supported
	[Oct18 17:35] overlayfs: idmapped layers are currently not supported
	[Oct18 17:36] overlayfs: idmapped layers are currently not supported
	[Oct18 17:37] overlayfs: idmapped layers are currently not supported
	[Oct18 17:39] overlayfs: idmapped layers are currently not supported
	[  +3.088699] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [dda012a63c45a5c37a124da696c59f0ac82f51c6728ee30f5a6b3a9df6f28b54] <==
	{"level":"warn","ts":"2025-10-18T17:47:40.971929Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-18T17:47:40.980072Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-18T17:47:40.988622Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-18T17:47:40.996753Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-18T17:47:40.998539Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-18T17:47:41.010141Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-18T17:47:41.010628Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-18T17:47:41.017357Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-18T17:47:41.020639Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-18T17:47:41.030778Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-18T17:47:41.046315Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-18T17:47:41.057263Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-18T17:47:41.061076Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-18T17:47:41.064357Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-18T17:47:41.070117Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-18T17:47:41.079324Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-18T17:47:41.088811Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-18T17:47:41.093676Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-18T17:47:41.096791Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-18T17:47:41.100553Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-18T17:47:41.109669Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-18T17:47:41.110332Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-18T17:47:41.120459Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-18T17:47:41.129447Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-18T17:47:41.177568Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 17:47:41 up  1:30,  0 user,  load average: 0.45, 0.90, 0.95
	Linux ha-181800 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [20677c7e60d1996e5ef30701c2fa483c048319a013425dfed6187c287c0356bf] <==
	I1018 17:47:09.509954       1 main.go:324] Node ha-181800-m04 has CIDR [10.244.3.0/24] 
	I1018 17:47:19.513044       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 17:47:19.513142       1 main.go:301] handling current node
	I1018 17:47:19.513180       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1018 17:47:19.513210       1 main.go:324] Node ha-181800-m02 has CIDR [10.244.1.0/24] 
	I1018 17:47:19.513410       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1018 17:47:19.513447       1 main.go:324] Node ha-181800-m03 has CIDR [10.244.2.0/24] 
	I1018 17:47:19.513554       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1018 17:47:19.513585       1 main.go:324] Node ha-181800-m04 has CIDR [10.244.3.0/24] 
	I1018 17:47:29.513013       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1018 17:47:29.513108       1 main.go:324] Node ha-181800-m03 has CIDR [10.244.2.0/24] 
	I1018 17:47:29.513281       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1018 17:47:29.513322       1 main.go:324] Node ha-181800-m04 has CIDR [10.244.3.0/24] 
	I1018 17:47:29.513420       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 17:47:29.513455       1 main.go:301] handling current node
	I1018 17:47:29.513491       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1018 17:47:29.513519       1 main.go:324] Node ha-181800-m02 has CIDR [10.244.1.0/24] 
	I1018 17:47:39.513042       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1018 17:47:39.513152       1 main.go:324] Node ha-181800-m03 has CIDR [10.244.2.0/24] 
	I1018 17:47:39.513341       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1018 17:47:39.513397       1 main.go:324] Node ha-181800-m04 has CIDR [10.244.3.0/24] 
	I1018 17:47:39.513497       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 17:47:39.513543       1 main.go:301] handling current node
	I1018 17:47:39.513578       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1018 17:47:39.513607       1 main.go:324] Node ha-181800-m02 has CIDR [10.244.1.0/24] 
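	Note: the kindnet lines above are its periodic reconcile loop. Every ten seconds it walks the node list, treats the entry matching its own node as the "current node", and records each other node's IP and pod CIDR (10.244.x.0/24) so routes to those CIDRs can be programmed. A minimal sketch of that listing step using client-go from inside the cluster (illustrative only, not kindnet's actual code):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		// In-cluster config, as a DaemonSet pod such as kindnet would use.
		cfg, err := rest.InClusterConfig()
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			// Each node exposes its addresses and the pod CIDR(s) assigned to it.
			for _, addr := range n.Status.Addresses {
				fmt.Printf("node %s %s=%s\n", n.Name, addr.Type, addr.Address)
			}
			fmt.Printf("node %s PodCIDRs=%v\n", n.Name, n.Spec.PodCIDRs)
		}
	}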
	
	
	==> kube-apiserver [3c32a11f94c333ae590b8745e77ffbb92367453ca4e6aee44e0e906b14390ca9] <==
	I1018 17:41:42.012115       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1018 17:41:42.012379       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1018 17:41:42.012425       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1018 17:41:42.013814       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1018 17:41:42.013944       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 17:41:42.025145       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1018 17:41:42.025992       1 cache.go:39] Caches are synced for autoregister controller
	I1018 17:41:42.026156       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1018 17:41:42.026261       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1018 17:41:42.026295       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1018 17:41:42.026308       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1018 17:41:42.026410       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1018 17:41:42.027548       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 17:41:42.033558       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	W1018 17:41:42.048863       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.3]
	I1018 17:41:42.050261       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 17:41:42.067717       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E1018 17:41:42.072232       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I1018 17:41:42.729546       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1018 17:41:43.284542       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I1018 17:41:45.808842       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 17:41:54.269828       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1018 17:41:54.405180       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 17:41:54.473862       1 controller.go:667] quota admission added evaluator for: deployments.apps
	W1018 17:42:03.284458       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	
	
	==> kube-apiserver [933870b5e943415b7ecac6fd98f8537b5e0e42b86569b4b7d319eff44a3da010] <==
	I1018 17:40:52.195862       1 server.go:150] Version: v1.34.1
	I1018 17:40:52.195974       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W1018 17:40:52.812771       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storagemigration.k8s.io/v1alpha1
	W1018 17:40:52.812808       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storage.k8s.io/v1alpha1
	W1018 17:40:52.812818       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=internal.apiserver.k8s.io/v1alpha1
	W1018 17:40:52.812823       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=imagepolicy.k8s.io/v1alpha1
	W1018 17:40:52.812828       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=resource.k8s.io/v1alpha3
	W1018 17:40:52.812832       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=coordination.k8s.io/v1alpha2
	W1018 17:40:52.812840       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=certificates.k8s.io/v1alpha1
	W1018 17:40:52.812844       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=admissionregistration.k8s.io/v1alpha1
	W1018 17:40:52.812850       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=authentication.k8s.io/v1alpha1
	W1018 17:40:52.812854       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=rbac.authorization.k8s.io/v1alpha1
	W1018 17:40:52.812858       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=scheduling.k8s.io/v1alpha1
	W1018 17:40:52.812862       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=node.k8s.io/v1alpha1
	W1018 17:40:52.829696       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1018 17:40:52.831179       1 logging.go:55] [core] [Channel #4 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1018 17:40:52.831774       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	I1018 17:40:52.838589       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1018 17:40:52.845223       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I1018 17:40:52.845250       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1018 17:40:52.845852       1 instance.go:239] Using reconciler: lease
	W1018 17:40:52.848887       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1018 17:41:12.829067       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1018 17:41:12.831182       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F1018 17:41:12.846964       1 instance.go:232] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [1ffdfbb5e9622e4192714fed8bfa4ea7a73dcc053f130642d8e29a5c565ebea9] <==
	I1018 17:41:07.403597       1 serving.go:386] Generated self-signed cert in-memory
	I1018 17:41:08.625550       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1018 17:41:08.625581       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 17:41:08.627414       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1018 17:41:08.627750       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 17:41:08.627867       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1018 17:41:08.628008       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E1018 17:41:23.855834       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8443/healthz\": dial tcp 192.168.49.2:8443: connect: connection refused"
	
	
	==> kube-controller-manager [96f0fa2b71beaec136d643f232999f193a1e3a16d1ca723cfb31748694731abe] <==
	I1018 17:41:47.143192       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1018 17:41:47.146859       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1018 17:41:47.162191       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1018 17:41:47.167924       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 17:41:47.177964       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1018 17:41:47.178029       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1018 17:41:47.178094       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1018 17:41:47.178140       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1018 17:41:47.186626       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1018 17:41:47.187226       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1018 17:41:47.187330       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 17:41:47.187422       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-181800"
	I1018 17:41:47.187477       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-181800-m02"
	I1018 17:41:47.187509       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-181800-m03"
	I1018 17:41:47.187545       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-181800-m04"
	I1018 17:41:47.187570       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1018 17:41:47.188233       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1018 17:41:47.188405       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1018 17:41:47.187047       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1018 17:41:47.188792       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 17:41:47.189599       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 17:41:47.189657       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-181800-m04"
	I1018 17:41:47.193090       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 17:41:47.204060       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 17:42:37.382673       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="PartialDisruption"
	
	
	==> kube-proxy [2e4a1f13e11624e5f4250e6082edc23d03fdf1fc7644e45614e6cdfc5dd39e76] <==
	I1018 17:42:06.262094       1 server_linux.go:53] "Using iptables proxy"
	I1018 17:42:06.334558       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 17:42:06.434813       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 17:42:06.434860       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1018 17:42:06.434950       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 17:42:06.451883       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 17:42:06.451931       1 server_linux.go:132] "Using iptables Proxier"
	I1018 17:42:06.455099       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 17:42:06.455439       1 server.go:527] "Version info" version="v1.34.1"
	I1018 17:42:06.455461       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 17:42:06.457621       1 config.go:200] "Starting service config controller"
	I1018 17:42:06.457642       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 17:42:06.457661       1 config.go:106] "Starting endpoint slice config controller"
	I1018 17:42:06.457665       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 17:42:06.457677       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 17:42:06.457681       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 17:42:06.458386       1 config.go:309] "Starting node config controller"
	I1018 17:42:06.458405       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 17:42:06.458412       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 17:42:06.558355       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 17:42:06.558395       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 17:42:06.558458       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [6e9b6c2f0e69c56776af6be092e8313aef540b7319fd0664f3eb3f947353a66b] <==
	E1018 17:41:07.266841       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8443/api/v1/namespaces?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 17:41:07.311343       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 17:41:07.533447       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 17:41:07.651007       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 17:41:08.355495       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 17:41:16.769551       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 17:41:17.489724       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 17:41:17.665056       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 17:41:18.205960       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 17:41:18.570146       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 17:41:18.949283       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 17:41:21.873636       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 17:41:21.969747       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 17:41:22.140090       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 17:41:23.503240       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 17:41:24.328010       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 17:41:25.411284       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 17:41:25.991046       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 17:41:26.048796       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 17:41:27.484563       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1018 17:41:28.014616       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 17:41:28.168052       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 17:41:29.601662       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 17:41:31.989429       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	I1018 17:42:01.134075       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 17:41:54 ha-181800 kubelet[798]: E1018 17:41:54.537384     798 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/coredns-66bc5c9577-p7nbg" podUID="9d361193-5b45-400e-8161-804fc30e7515"
	Oct 18 17:41:54 ha-181800 kubelet[798]: E1018 17:41:54.541593     798 kuberuntime_manager.go:1449] "Unhandled Error" err="container kindnet-cni start failed in pod kindnet-72mvm_kube-system(5edfc356-9d49-4895-b36a-06c2bd39155a): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 18 17:41:54 ha-181800 kubelet[798]: E1018 17:41:54.541650     798 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/kindnet-72mvm" podUID="5edfc356-9d49-4895-b36a-06c2bd39155a"
	Oct 18 17:41:54 ha-181800 kubelet[798]: E1018 17:41:54.543446     798 kuberuntime_manager.go:1449] "Unhandled Error" err="container busybox start failed in pod busybox-7b57f96db7-fbwpv_default(58e37574-901f-46d4-bb33-2d0f7ae9c08c): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 18 17:41:54 ha-181800 kubelet[798]: E1018 17:41:54.543484     798 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="default/busybox-7b57f96db7-fbwpv" podUID="58e37574-901f-46d4-bb33-2d0f7ae9c08c"
	Oct 18 17:41:54 ha-181800 kubelet[798]: E1018 17:41:54.556129     798 kuberuntime_manager.go:1449] "Unhandled Error" err="container storage-provisioner start failed in pod storage-provisioner_kube-system(3c6521cd-8e1b-46aa-96a3-39e475e1426c): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 18 17:41:54 ha-181800 kubelet[798]: E1018 17:41:54.556318     798 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/storage-provisioner" podUID="3c6521cd-8e1b-46aa-96a3-39e475e1426c"
	Oct 18 17:41:54 ha-181800 kubelet[798]: W1018 17:41:54.573814     798 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/5743bf3218eb5aef405eed98d37c004771c8345cbd69290b8943dc0c7cbde7c2/crio-578310fdfac473102b8772a2897f522e4e15e81fc4a884380a337b9e6d1aa5b2 WatchSource:0}: Error finding container 578310fdfac473102b8772a2897f522e4e15e81fc4a884380a337b9e6d1aa5b2: Status 404 returned error can't find the container with id 578310fdfac473102b8772a2897f522e4e15e81fc4a884380a337b9e6d1aa5b2
	Oct 18 17:41:54 ha-181800 kubelet[798]: E1018 17:41:54.578568     798 kuberuntime_manager.go:1449] "Unhandled Error" err="container coredns start failed in pod coredns-66bc5c9577-f6v2w_kube-system(a1fbdf00-9636-43a5-b1ed-a98bcacb5537): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 18 17:41:54 ha-181800 kubelet[798]: E1018 17:41:54.578616     798 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/coredns-66bc5c9577-f6v2w" podUID="a1fbdf00-9636-43a5-b1ed-a98bcacb5537"
	Oct 18 17:41:55 ha-181800 kubelet[798]: I1018 17:41:55.114096     798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1a1eda2cde092be2eda0d8bef8f7ec3" path="/var/lib/kubelet/pods/a1a1eda2cde092be2eda0d8bef8f7ec3/volumes"
	Oct 18 17:41:55 ha-181800 kubelet[798]: E1018 17:41:55.433187     798 kuberuntime_manager.go:1449] "Unhandled Error" err="container coredns start failed in pod coredns-66bc5c9577-f6v2w_kube-system(a1fbdf00-9636-43a5-b1ed-a98bcacb5537): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 18 17:41:55 ha-181800 kubelet[798]: E1018 17:41:55.433245     798 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/coredns-66bc5c9577-f6v2w" podUID="a1fbdf00-9636-43a5-b1ed-a98bcacb5537"
	Oct 18 17:41:55 ha-181800 kubelet[798]: E1018 17:41:55.435023     798 kuberuntime_manager.go:1449] "Unhandled Error" err="container coredns start failed in pod coredns-66bc5c9577-p7nbg_kube-system(9d361193-5b45-400e-8161-804fc30e7515): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 18 17:41:55 ha-181800 kubelet[798]: E1018 17:41:55.435148     798 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/coredns-66bc5c9577-p7nbg" podUID="9d361193-5b45-400e-8161-804fc30e7515"
	Oct 18 17:41:55 ha-181800 kubelet[798]: E1018 17:41:55.441863     798 kuberuntime_manager.go:1449] "Unhandled Error" err="container busybox start failed in pod busybox-7b57f96db7-fbwpv_default(58e37574-901f-46d4-bb33-2d0f7ae9c08c): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 18 17:41:55 ha-181800 kubelet[798]: E1018 17:41:55.441915     798 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="default/busybox-7b57f96db7-fbwpv" podUID="58e37574-901f-46d4-bb33-2d0f7ae9c08c"
	Oct 18 17:41:55 ha-181800 kubelet[798]: E1018 17:41:55.445392     798 kuberuntime_manager.go:1449] "Unhandled Error" err="container kube-proxy start failed in pod kube-proxy-stgvm_kube-system(15b89226-91ae-478f-acfe-7841776b1377): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 18 17:41:55 ha-181800 kubelet[798]: E1018 17:41:55.445443     798 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/kube-proxy-stgvm" podUID="15b89226-91ae-478f-acfe-7841776b1377"
	Oct 18 17:41:55 ha-181800 kubelet[798]: E1018 17:41:55.450521     798 kuberuntime_manager.go:1449] "Unhandled Error" err="container kindnet-cni start failed in pod kindnet-72mvm_kube-system(5edfc356-9d49-4895-b36a-06c2bd39155a): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 18 17:41:55 ha-181800 kubelet[798]: E1018 17:41:55.450564     798 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/kindnet-72mvm" podUID="5edfc356-9d49-4895-b36a-06c2bd39155a"
	Oct 18 17:41:55 ha-181800 kubelet[798]: E1018 17:41:55.458132     798 kuberuntime_manager.go:1449] "Unhandled Error" err="container storage-provisioner start failed in pod storage-provisioner_kube-system(3c6521cd-8e1b-46aa-96a3-39e475e1426c): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 18 17:41:55 ha-181800 kubelet[798]: E1018 17:41:55.458255     798 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/storage-provisioner" podUID="3c6521cd-8e1b-46aa-96a3-39e475e1426c"
	Oct 18 17:42:53 ha-181800 kubelet[798]: E1018 17:42:53.045182     798 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f59bab0e4fe86f69836eb694e5d31105cca80fd917445482f23b6d46da571384\": container with ID starting with f59bab0e4fe86f69836eb694e5d31105cca80fd917445482f23b6d46da571384 not found: ID does not exist" containerID="f59bab0e4fe86f69836eb694e5d31105cca80fd917445482f23b6d46da571384"
	Oct 18 17:42:53 ha-181800 kubelet[798]: I1018 17:42:53.045240     798 kuberuntime_gc.go:364] "Error getting ContainerStatus for containerID" containerID="f59bab0e4fe86f69836eb694e5d31105cca80fd917445482f23b6d46da571384" err="rpc error: code = NotFound desc = could not find container \"f59bab0e4fe86f69836eb694e5d31105cca80fd917445482f23b6d46da571384\": container with ID starting with f59bab0e4fe86f69836eb694e5d31105cca80fd917445482f23b6d46da571384 not found: ID does not exist"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-181800 -n ha-181800
helpers_test.go:269: (dbg) Run:  kubectl --context ha-181800 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (504.53s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (5.48s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-181800 node delete m03 --alsologtostderr -v 5: exit status 83 (193.655852ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-181800-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-181800"

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 17:47:43.803857   67811 out.go:360] Setting OutFile to fd 1 ...
	I1018 17:47:43.804073   67811 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 17:47:43.804106   67811 out.go:374] Setting ErrFile to fd 2...
	I1018 17:47:43.804126   67811 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 17:47:43.804385   67811 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-2509/.minikube/bin
	I1018 17:47:43.804689   67811 mustload.go:65] Loading cluster: ha-181800
	I1018 17:47:43.805172   67811 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:47:43.805682   67811 cli_runner.go:164] Run: docker container inspect ha-181800 --format={{.State.Status}}
	I1018 17:47:43.833138   67811 host.go:66] Checking if "ha-181800" exists ...
	I1018 17:47:43.833493   67811 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 17:47:43.891786   67811 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:2 ContainersPaused:0 ContainersStopped:2 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-18 17:47:43.881931411 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 17:47:43.892192   67811 cli_runner.go:164] Run: docker container inspect ha-181800-m02 --format={{.State.Status}}
	I1018 17:47:43.914377   67811 host.go:66] Checking if "ha-181800-m02" exists ...
	I1018 17:47:43.914905   67811 cli_runner.go:164] Run: docker container inspect ha-181800-m03 --format={{.State.Status}}
	I1018 17:47:43.936799   67811 out.go:179] * The control-plane node ha-181800-m03 host is not running: state=Stopped
	I1018 17:47:43.939719   67811 out.go:179]   To start a cluster, run: "minikube start -p ha-181800"

                                                
                                                
** /stderr **
ha_test.go:491: node delete returned an error. args "out/minikube-linux-arm64 -p ha-181800 node delete m03 --alsologtostderr -v 5": exit status 83
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 status --alsologtostderr -v 5
ha_test.go:495: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-181800 status --alsologtostderr -v 5: exit status 7 (607.892903ms)

                                                
                                                
-- stdout --
	ha-181800
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-181800-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-181800-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-181800-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 17:47:44.008541   67865 out.go:360] Setting OutFile to fd 1 ...
	I1018 17:47:44.008675   67865 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 17:47:44.008686   67865 out.go:374] Setting ErrFile to fd 2...
	I1018 17:47:44.008691   67865 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 17:47:44.009206   67865 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-2509/.minikube/bin
	I1018 17:47:44.009491   67865 out.go:368] Setting JSON to false
	I1018 17:47:44.009538   67865 mustload.go:65] Loading cluster: ha-181800
	I1018 17:47:44.009604   67865 notify.go:220] Checking for updates...
	I1018 17:47:44.011510   67865 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:47:44.011540   67865 status.go:174] checking status of ha-181800 ...
	I1018 17:47:44.012098   67865 cli_runner.go:164] Run: docker container inspect ha-181800 --format={{.State.Status}}
	I1018 17:47:44.033111   67865 status.go:371] ha-181800 host status = "Running" (err=<nil>)
	I1018 17:47:44.033140   67865 host.go:66] Checking if "ha-181800" exists ...
	I1018 17:47:44.033456   67865 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-181800
	I1018 17:47:44.058617   67865 host.go:66] Checking if "ha-181800" exists ...
	I1018 17:47:44.058925   67865 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 17:47:44.058983   67865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:47:44.078512   67865 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800/id_rsa Username:docker}
	I1018 17:47:44.186491   67865 ssh_runner.go:195] Run: systemctl --version
	I1018 17:47:44.194251   67865 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 17:47:44.207260   67865 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 17:47:44.290006   67865 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:2 ContainersPaused:0 ContainersStopped:2 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-18 17:47:44.279959189 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 17:47:44.290553   67865 kubeconfig.go:125] found "ha-181800" server: "https://192.168.49.254:8443"
	I1018 17:47:44.290583   67865 api_server.go:166] Checking apiserver status ...
	I1018 17:47:44.290625   67865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:47:44.304155   67865 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1095/cgroup
	I1018 17:47:44.313551   67865 api_server.go:182] apiserver freezer: "9:freezer:/docker/5743bf3218eb5aef405eed98d37c004771c8345cbd69290b8943dc0c7cbde7c2/crio/crio-3c32a11f94c333ae590b8745e77ffbb92367453ca4e6aee44e0e906b14390ca9"
	I1018 17:47:44.313631   67865 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/5743bf3218eb5aef405eed98d37c004771c8345cbd69290b8943dc0c7cbde7c2/crio/crio-3c32a11f94c333ae590b8745e77ffbb92367453ca4e6aee44e0e906b14390ca9/freezer.state
	I1018 17:47:44.321361   67865 api_server.go:204] freezer state: "THAWED"
	I1018 17:47:44.321404   67865 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1018 17:47:44.329725   67865 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1018 17:47:44.329765   67865 status.go:463] ha-181800 apiserver status = Running (err=<nil>)
	I1018 17:47:44.329777   67865 status.go:176] ha-181800 status: &{Name:ha-181800 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 17:47:44.329798   67865 status.go:174] checking status of ha-181800-m02 ...
	I1018 17:47:44.330119   67865 cli_runner.go:164] Run: docker container inspect ha-181800-m02 --format={{.State.Status}}
	I1018 17:47:44.348094   67865 status.go:371] ha-181800-m02 host status = "Running" (err=<nil>)
	I1018 17:47:44.348118   67865 host.go:66] Checking if "ha-181800-m02" exists ...
	I1018 17:47:44.348420   67865 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-181800-m02
	I1018 17:47:44.369013   67865 host.go:66] Checking if "ha-181800-m02" exists ...
	I1018 17:47:44.369341   67865 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 17:47:44.369395   67865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m02
	I1018 17:47:44.386434   67865 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m02/id_rsa Username:docker}
	I1018 17:47:44.486135   67865 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 17:47:44.500437   67865 kubeconfig.go:125] found "ha-181800" server: "https://192.168.49.254:8443"
	I1018 17:47:44.500514   67865 api_server.go:166] Checking apiserver status ...
	I1018 17:47:44.500592   67865 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1018 17:47:44.511897   67865 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1018 17:47:44.511975   67865 status.go:463] ha-181800-m02 apiserver status = Running (err=<nil>)
	I1018 17:47:44.512017   67865 status.go:176] ha-181800-m02 status: &{Name:ha-181800-m02 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 17:47:44.512077   67865 status.go:174] checking status of ha-181800-m03 ...
	I1018 17:47:44.512445   67865 cli_runner.go:164] Run: docker container inspect ha-181800-m03 --format={{.State.Status}}
	I1018 17:47:44.529856   67865 status.go:371] ha-181800-m03 host status = "Stopped" (err=<nil>)
	I1018 17:47:44.529892   67865 status.go:384] host is not running, skipping remaining checks
	I1018 17:47:44.529902   67865 status.go:176] ha-181800-m03 status: &{Name:ha-181800-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 17:47:44.529929   67865 status.go:174] checking status of ha-181800-m04 ...
	I1018 17:47:44.530214   67865 cli_runner.go:164] Run: docker container inspect ha-181800-m04 --format={{.State.Status}}
	I1018 17:47:44.548229   67865 status.go:371] ha-181800-m04 host status = "Stopped" (err=<nil>)
	I1018 17:47:44.548249   67865 status.go:384] host is not running, skipping remaining checks
	I1018 17:47:44.548255   67865 status.go:176] ha-181800-m04 status: &{Name:ha-181800-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:497: failed to run minikube status. args "out/minikube-linux-arm64 -p ha-181800 status --alsologtostderr -v 5" : exit status 7
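Note: the stderr trace above shows the shape of minikube's per-node status check: inspect the container state, SSH in, verify the kubelet unit, locate the kube-apiserver pid with pgrep, confirm its freezer cgroup is THAWED, and finally probe https://192.168.49.254:8443/healthz for a 200 "ok". A minimal Go sketch of that last probe follows; the endpoint value comes from the log, while the helper name and the skipped TLS verification are illustrative assumptions (the real check authenticates against the cluster's certificates).

// Minimal sketch of the /healthz probe performed in the trace above.
// apiserverHealthy is a hypothetical helper; InsecureSkipVerify is for
// illustration only, production code should verify the cluster CA.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func apiserverHealthy(endpoint string) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Illustrative only; minikube uses the kubeconfig's certificates.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(endpoint + "/healthz")
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// Healthy when the API server answers 200 with body "ok".
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}

func main() {
	ok, err := apiserverHealthy("https://192.168.49.254:8443")
	fmt.Println(ok, err)
}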
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-181800
helpers_test.go:243: (dbg) docker inspect ha-181800:

-- stdout --
	[
	    {
	        "Id": "5743bf3218eb5aef405eed98d37c004771c8345cbd69290b8943dc0c7cbde7c2",
	        "Created": "2025-10-18T17:32:56.632116312Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 51376,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T17:39:46.245999615Z",
	            "FinishedAt": "2025-10-18T17:39:45.630064495Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/5743bf3218eb5aef405eed98d37c004771c8345cbd69290b8943dc0c7cbde7c2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5743bf3218eb5aef405eed98d37c004771c8345cbd69290b8943dc0c7cbde7c2/hostname",
	        "HostsPath": "/var/lib/docker/containers/5743bf3218eb5aef405eed98d37c004771c8345cbd69290b8943dc0c7cbde7c2/hosts",
	        "LogPath": "/var/lib/docker/containers/5743bf3218eb5aef405eed98d37c004771c8345cbd69290b8943dc0c7cbde7c2/5743bf3218eb5aef405eed98d37c004771c8345cbd69290b8943dc0c7cbde7c2-json.log",
	        "Name": "/ha-181800",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-181800:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-181800",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5743bf3218eb5aef405eed98d37c004771c8345cbd69290b8943dc0c7cbde7c2",
	                "LowerDir": "/var/lib/docker/overlay2/3ee9cac217f1df57c366a054bdb2c2082365ef42fdd9cc6be8e55acf85cb35b8-init/diff:/var/lib/docker/overlay2/584ab177b02ad2db5330471b7171ad39934c457d8615b9ee4939a04b59f78474/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3ee9cac217f1df57c366a054bdb2c2082365ef42fdd9cc6be8e55acf85cb35b8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3ee9cac217f1df57c366a054bdb2c2082365ef42fdd9cc6be8e55acf85cb35b8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3ee9cac217f1df57c366a054bdb2c2082365ef42fdd9cc6be8e55acf85cb35b8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-181800",
	                "Source": "/var/lib/docker/volumes/ha-181800/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-181800",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-181800",
	                "name.minikube.sigs.k8s.io": "ha-181800",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "efaac0f11b270c145ecb6a49cdddbc0cc50de47d14ed81303acfb3d93ecaef30",
	            "SandboxKey": "/var/run/docker/netns/efaac0f11b27",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32808"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32809"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32812"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32810"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32811"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-181800": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4a:ba:f8:3c:6b:00",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "903568cdf824d38f52cb9a58c116a852c83eb599cf8cc87e25ba21b593e45142",
	                    "EndpointID": "af9b438a40e91de308acdf0827c862a018060c99dd48a4f5e67a2e361be9d341",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-181800",
	                        "5743bf3218eb"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
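Note: the inspect output above is where the test harness gets its connection details: the ha-181800 network entry supplies the node IP (192.168.49.2), and the Ports map supplies the host-side bindings (22/tcp published on 127.0.0.1:32808). The cli_runner lines in the logs recover that SSH port with a Go template; a small sketch of the same lookup is below, with the function name as a hypothetical stand-in.

// Sketch of recovering the forwarded SSH port (22/tcp -> 32808 above) using
// the same Go template that appears in the cli_runner log lines.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func hostSSHPort(container string) (string, error) {
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", fmt.Errorf("docker inspect %s: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostSSHPort("ha-181800")
	fmt.Println(port, err)
}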
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-181800 -n ha-181800
helpers_test.go:252: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p ha-181800 logs -n 25: (2.35246747s)
helpers_test.go:260: TestMultiControlPlane/serial/DeleteSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-181800 ssh -n ha-181800-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ ssh     │ ha-181800 ssh -n ha-181800-m02 sudo cat /home/docker/cp-test_ha-181800-m03_ha-181800-m02.txt                                         │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ cp      │ ha-181800 cp ha-181800-m03:/home/docker/cp-test.txt ha-181800-m04:/home/docker/cp-test_ha-181800-m03_ha-181800-m04.txt               │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ ssh     │ ha-181800 ssh -n ha-181800-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ ssh     │ ha-181800 ssh -n ha-181800-m04 sudo cat /home/docker/cp-test_ha-181800-m03_ha-181800-m04.txt                                         │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ cp      │ ha-181800 cp testdata/cp-test.txt ha-181800-m04:/home/docker/cp-test.txt                                                             │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ ssh     │ ha-181800 ssh -n ha-181800-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ cp      │ ha-181800 cp ha-181800-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1463328482/001/cp-test_ha-181800-m04.txt │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ ssh     │ ha-181800 ssh -n ha-181800-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ cp      │ ha-181800 cp ha-181800-m04:/home/docker/cp-test.txt ha-181800:/home/docker/cp-test_ha-181800-m04_ha-181800.txt                       │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ ssh     │ ha-181800 ssh -n ha-181800-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ ssh     │ ha-181800 ssh -n ha-181800 sudo cat /home/docker/cp-test_ha-181800-m04_ha-181800.txt                                                 │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ cp      │ ha-181800 cp ha-181800-m04:/home/docker/cp-test.txt ha-181800-m02:/home/docker/cp-test_ha-181800-m04_ha-181800-m02.txt               │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ ssh     │ ha-181800 ssh -n ha-181800-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ ssh     │ ha-181800 ssh -n ha-181800-m02 sudo cat /home/docker/cp-test_ha-181800-m04_ha-181800-m02.txt                                         │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ cp      │ ha-181800 cp ha-181800-m04:/home/docker/cp-test.txt ha-181800-m03:/home/docker/cp-test_ha-181800-m04_ha-181800-m03.txt               │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ ssh     │ ha-181800 ssh -n ha-181800-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ ssh     │ ha-181800 ssh -n ha-181800-m03 sudo cat /home/docker/cp-test_ha-181800-m04_ha-181800-m03.txt                                         │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ node    │ ha-181800 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ node    │ ha-181800 node start m02 --alsologtostderr -v 5                                                                                      │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:39 UTC │
	│ node    │ ha-181800 node list --alsologtostderr -v 5                                                                                           │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:39 UTC │                     │
	│ stop    │ ha-181800 stop --alsologtostderr -v 5                                                                                                │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:39 UTC │ 18 Oct 25 17:39 UTC │
	│ start   │ ha-181800 start --wait true --alsologtostderr -v 5                                                                                   │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:39 UTC │                     │
	│ node    │ ha-181800 node list --alsologtostderr -v 5                                                                                           │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:47 UTC │                     │
	│ node    │ ha-181800 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:47 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 17:39:45
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 17:39:45.975281   51251 out.go:360] Setting OutFile to fd 1 ...
	I1018 17:39:45.975504   51251 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 17:39:45.975531   51251 out.go:374] Setting ErrFile to fd 2...
	I1018 17:39:45.975549   51251 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 17:39:45.975846   51251 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-2509/.minikube/bin
	I1018 17:39:45.976262   51251 out.go:368] Setting JSON to false
	I1018 17:39:45.977169   51251 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":4935,"bootTime":1760804251,"procs":151,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1018 17:39:45.977269   51251 start.go:141] virtualization:  
	I1018 17:39:45.980610   51251 out.go:179] * [ha-181800] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 17:39:45.984311   51251 out.go:179]   - MINIKUBE_LOCATION=21409
	I1018 17:39:45.984374   51251 notify.go:220] Checking for updates...
	I1018 17:39:45.990274   51251 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 17:39:45.993215   51251 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-2509/kubeconfig
	I1018 17:39:45.996106   51251 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-2509/.minikube
	I1018 17:39:45.999014   51251 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 17:39:46.004420   51251 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 17:39:46.008306   51251 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:39:46.008436   51251 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 17:39:46.042019   51251 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 17:39:46.042131   51251 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 17:39:46.099091   51251 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-18 17:39:46.089556228 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 17:39:46.099210   51251 docker.go:318] overlay module found
	I1018 17:39:46.102259   51251 out.go:179] * Using the docker driver based on existing profile
	I1018 17:39:46.105078   51251 start.go:305] selected driver: docker
	I1018 17:39:46.105099   51251 start.go:925] validating driver "docker" against &{Name:ha-181800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-181800 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:f
alse ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Dis
ableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 17:39:46.105237   51251 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 17:39:46.105338   51251 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 17:39:46.159602   51251 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-18 17:39:46.150874009 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 17:39:46.159982   51251 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 17:39:46.160020   51251 cni.go:84] Creating CNI manager for ""
	I1018 17:39:46.160080   51251 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1018 17:39:46.160126   51251 start.go:349] cluster config:
	{Name:ha-181800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-181800 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 17:39:46.165176   51251 out.go:179] * Starting "ha-181800" primary control-plane node in "ha-181800" cluster
	I1018 17:39:46.168051   51251 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 17:39:46.170939   51251 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 17:39:46.173836   51251 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 17:39:46.173896   51251 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1018 17:39:46.173911   51251 cache.go:58] Caching tarball of preloaded images
	I1018 17:39:46.173925   51251 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 17:39:46.173990   51251 preload.go:233] Found /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 17:39:46.174000   51251 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 17:39:46.174155   51251 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/config.json ...
	I1018 17:39:46.192746   51251 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 17:39:46.192769   51251 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 17:39:46.192782   51251 cache.go:232] Successfully downloaded all kic artifacts
	I1018 17:39:46.192803   51251 start.go:360] acquireMachinesLock for ha-181800: {Name:mk3f5dfba2ab7d01f94f924dfcc5edab5f076901 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 17:39:46.192864   51251 start.go:364] duration metric: took 36.243µs to acquireMachinesLock for "ha-181800"
	I1018 17:39:46.192888   51251 start.go:96] Skipping create...Using existing machine configuration
	I1018 17:39:46.192896   51251 fix.go:54] fixHost starting: 
	I1018 17:39:46.193211   51251 cli_runner.go:164] Run: docker container inspect ha-181800 --format={{.State.Status}}
	I1018 17:39:46.209470   51251 fix.go:112] recreateIfNeeded on ha-181800: state=Stopped err=<nil>
	W1018 17:39:46.209498   51251 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 17:39:46.212825   51251 out.go:252] * Restarting existing docker container for "ha-181800" ...
	I1018 17:39:46.212900   51251 cli_runner.go:164] Run: docker start ha-181800
	I1018 17:39:46.480673   51251 cli_runner.go:164] Run: docker container inspect ha-181800 --format={{.State.Status}}
	I1018 17:39:46.500591   51251 kic.go:430] container "ha-181800" state is running.
	I1018 17:39:46.501011   51251 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-181800
	I1018 17:39:46.526396   51251 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/config.json ...
	I1018 17:39:46.526638   51251 machine.go:93] provisionDockerMachine start ...
	I1018 17:39:46.526707   51251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:39:46.546472   51251 main.go:141] libmachine: Using SSH client type: native
	I1018 17:39:46.546909   51251 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32808 <nil> <nil>}
	I1018 17:39:46.546927   51251 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 17:39:46.547526   51251 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1018 17:39:49.696893   51251 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-181800
	
	I1018 17:39:49.696925   51251 ubuntu.go:182] provisioning hostname "ha-181800"
	I1018 17:39:49.697031   51251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:39:49.714524   51251 main.go:141] libmachine: Using SSH client type: native
	I1018 17:39:49.714832   51251 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32808 <nil> <nil>}
	I1018 17:39:49.714849   51251 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-181800 && echo "ha-181800" | sudo tee /etc/hostname
	I1018 17:39:49.873528   51251 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-181800
	
	I1018 17:39:49.873612   51251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:39:49.891188   51251 main.go:141] libmachine: Using SSH client type: native
	I1018 17:39:49.891504   51251 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32808 <nil> <nil>}
	I1018 17:39:49.891521   51251 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-181800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-181800/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-181800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 17:39:50.037199   51251 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 17:39:50.037228   51251 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-2509/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-2509/.minikube}
	I1018 17:39:50.037247   51251 ubuntu.go:190] setting up certificates
	I1018 17:39:50.037257   51251 provision.go:84] configureAuth start
	I1018 17:39:50.037320   51251 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-181800
	I1018 17:39:50.055129   51251 provision.go:143] copyHostCerts
	I1018 17:39:50.055181   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem
	I1018 17:39:50.055213   51251 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem, removing ...
	I1018 17:39:50.055234   51251 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem
	I1018 17:39:50.055314   51251 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem (1078 bytes)
	I1018 17:39:50.055408   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem
	I1018 17:39:50.055430   51251 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem, removing ...
	I1018 17:39:50.055438   51251 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem
	I1018 17:39:50.055466   51251 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem (1123 bytes)
	I1018 17:39:50.055525   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem
	I1018 17:39:50.055546   51251 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem, removing ...
	I1018 17:39:50.055555   51251 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem
	I1018 17:39:50.055581   51251 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem (1675 bytes)
	I1018 17:39:50.055647   51251 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem org=jenkins.ha-181800 san=[127.0.0.1 192.168.49.2 ha-181800 localhost minikube]
	I1018 17:39:50.382522   51251 provision.go:177] copyRemoteCerts
	I1018 17:39:50.382593   51251 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 17:39:50.382633   51251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:39:50.403959   51251 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800/id_rsa Username:docker}
	I1018 17:39:50.508789   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1018 17:39:50.508850   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 17:39:50.526450   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1018 17:39:50.526538   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1018 17:39:50.544187   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1018 17:39:50.544274   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 17:39:50.561987   51251 provision.go:87] duration metric: took 524.706666ms to configureAuth
	I1018 17:39:50.562063   51251 ubuntu.go:206] setting minikube options for container-runtime
	I1018 17:39:50.562317   51251 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:39:50.562424   51251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:39:50.578939   51251 main.go:141] libmachine: Using SSH client type: native
	I1018 17:39:50.579244   51251 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32808 <nil> <nil>}
	I1018 17:39:50.579264   51251 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 17:39:50.937128   51251 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 17:39:50.937197   51251 machine.go:96] duration metric: took 4.410541s to provisionDockerMachine
	I1018 17:39:50.937222   51251 start.go:293] postStartSetup for "ha-181800" (driver="docker")
	I1018 17:39:50.937247   51251 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 17:39:50.937359   51251 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 17:39:50.937444   51251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:39:50.959339   51251 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800/id_rsa Username:docker}
	I1018 17:39:51.065300   51251 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 17:39:51.068761   51251 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 17:39:51.068792   51251 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 17:39:51.068803   51251 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/addons for local assets ...
	I1018 17:39:51.068858   51251 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/files for local assets ...
	I1018 17:39:51.068963   51251 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> 43202.pem in /etc/ssl/certs
	I1018 17:39:51.068976   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> /etc/ssl/certs/43202.pem
	I1018 17:39:51.069076   51251 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 17:39:51.076928   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /etc/ssl/certs/43202.pem (1708 bytes)
	I1018 17:39:51.094473   51251 start.go:296] duration metric: took 157.222631ms for postStartSetup
	I1018 17:39:51.094579   51251 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 17:39:51.094625   51251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:39:51.113220   51251 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800/id_rsa Username:docker}
	I1018 17:39:51.213567   51251 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 17:39:51.218175   51251 fix.go:56] duration metric: took 5.025272015s for fixHost
	I1018 17:39:51.218200   51251 start.go:83] releasing machines lock for "ha-181800", held for 5.025323101s
	I1018 17:39:51.218283   51251 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-181800
	I1018 17:39:51.235815   51251 ssh_runner.go:195] Run: cat /version.json
	I1018 17:39:51.235850   51251 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 17:39:51.235866   51251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:39:51.235904   51251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:39:51.261163   51251 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800/id_rsa Username:docker}
	I1018 17:39:51.270603   51251 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800/id_rsa Username:docker}
	I1018 17:39:51.360468   51251 ssh_runner.go:195] Run: systemctl --version
	I1018 17:39:51.454722   51251 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 17:39:51.498840   51251 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 17:39:51.503695   51251 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 17:39:51.503796   51251 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 17:39:51.511526   51251 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 17:39:51.511549   51251 start.go:495] detecting cgroup driver to use...
	I1018 17:39:51.511578   51251 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 17:39:51.511630   51251 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 17:39:51.526599   51251 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 17:39:51.539484   51251 docker.go:218] disabling cri-docker service (if available) ...
	I1018 17:39:51.539576   51251 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 17:39:51.554963   51251 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 17:39:51.568183   51251 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 17:39:51.676636   51251 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 17:39:51.792230   51251 docker.go:234] disabling docker service ...
	I1018 17:39:51.792306   51251 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 17:39:51.806847   51251 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 17:39:51.819137   51251 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 17:39:51.938883   51251 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 17:39:52.058796   51251 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 17:39:52.072487   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 17:39:52.088092   51251 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 17:39:52.088205   51251 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:39:52.097568   51251 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 17:39:52.097729   51251 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:39:52.107431   51251 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:39:52.116597   51251 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:39:52.125822   51251 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 17:39:52.134598   51251 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:39:52.143667   51251 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:39:52.151898   51251 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:39:52.160172   51251 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 17:39:52.167407   51251 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 17:39:52.174657   51251 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 17:39:52.287403   51251 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 17:39:52.421729   51251 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 17:39:52.421850   51251 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 17:39:52.425707   51251 start.go:563] Will wait 60s for crictl version
	I1018 17:39:52.425813   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:39:52.429420   51251 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 17:39:52.453867   51251 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 17:39:52.453974   51251 ssh_runner.go:195] Run: crio --version
	I1018 17:39:52.486777   51251 ssh_runner.go:195] Run: crio --version
	I1018 17:39:52.520354   51251 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 17:39:52.523389   51251 cli_runner.go:164] Run: docker network inspect ha-181800 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 17:39:52.539892   51251 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1018 17:39:52.543780   51251 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 17:39:52.553416   51251 kubeadm.go:883] updating cluster {Name:ha-181800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-181800 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 17:39:52.553576   51251 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 17:39:52.553634   51251 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 17:39:52.588251   51251 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 17:39:52.588276   51251 crio.go:433] Images already preloaded, skipping extraction
	I1018 17:39:52.588335   51251 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 17:39:52.613957   51251 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 17:39:52.613979   51251 cache_images.go:85] Images are preloaded, skipping loading
	I1018 17:39:52.613989   51251 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1018 17:39:52.614102   51251 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-181800 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-181800 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 17:39:52.614189   51251 ssh_runner.go:195] Run: crio config
	I1018 17:39:52.670252   51251 cni.go:84] Creating CNI manager for ""
	I1018 17:39:52.670275   51251 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1018 17:39:52.670294   51251 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 17:39:52.670319   51251 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-181800 NodeName:ha-181800 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 17:39:52.670455   51251 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-181800"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 17:39:52.670475   51251 kube-vip.go:115] generating kube-vip config ...
	I1018 17:39:52.670529   51251 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1018 17:39:52.682279   51251 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1018 17:39:52.682377   51251 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
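
Because the lsmod | grep ip_vs probe above exited with status 1, IPVS-based control-plane load balancing is skipped and the generated kube-vip manifest falls back to ARP failover (vip_arp=true) with leader election on the plndr-cp-lock lease. A minimal sketch of that probe, treating a non-zero exit as "modules not loaded" (illustrative only):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// grep exits 1 when it finds no match, so a non-zero status here
	// means the ip_vs modules are not loaded on the node.
	if err := exec.Command("sh", "-c", "lsmod | grep -q ip_vs").Run(); err != nil {
		fmt.Println("ip_vs not available; fall back to ARP-based VIP failover")
		return
	}
	fmt.Println("ip_vs available; IPVS load balancing could be enabled")
}
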
	I1018 17:39:52.682436   51251 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 17:39:52.689950   51251 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 17:39:52.690041   51251 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1018 17:39:52.697809   51251 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1018 17:39:52.710709   51251 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 17:39:52.723367   51251 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1018 17:39:52.735890   51251 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1018 17:39:52.748648   51251 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1018 17:39:52.752220   51251 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 17:39:52.762098   51251 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 17:39:52.871320   51251 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 17:39:52.886583   51251 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800 for IP: 192.168.49.2
	I1018 17:39:52.886603   51251 certs.go:195] generating shared ca certs ...
	I1018 17:39:52.886618   51251 certs.go:227] acquiring lock for ca certs: {Name:mk544ed642fa2832e9f6dd22fa45f3270b7c1ad4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:39:52.886785   51251 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key
	I1018 17:39:52.886838   51251 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key
	I1018 17:39:52.886849   51251 certs.go:257] generating profile certs ...
	I1018 17:39:52.886923   51251 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/client.key
	I1018 17:39:52.886953   51251 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.key.46a58690
	I1018 17:39:52.886970   51251 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.crt.46a58690 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1018 17:39:53.268315   51251 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.crt.46a58690 ...
	I1018 17:39:53.268348   51251 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.crt.46a58690: {Name:mk0cc861493b9d286eed0bfb736b15e28a1706f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:39:53.268572   51251 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.key.46a58690 ...
	I1018 17:39:53.268589   51251 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.key.46a58690: {Name:mk424cb4f615a1903e846801cb9cb2e734afdfb7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:39:53.268677   51251 certs.go:382] copying /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.crt.46a58690 -> /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.crt
	I1018 17:39:53.268822   51251 certs.go:386] copying /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.key.46a58690 -> /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.key
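
The apiserver serving certificate is regenerated here with a SAN list covering every control-plane IP plus the service VIP and HA VIP (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.49.2-4, 192.168.49.254). A self-contained sketch of issuing such a certificate with Go's crypto/x509, assuming an RSA CA key in PKCS#1 PEM form and placeholder file names (illustrative, not minikube's implementation):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func mustDecode(path string) []byte {
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block in " + path)
	}
	return block.Bytes
}

func main() {
	// Assumes an RSA CA key in PKCS#1 form; paths are placeholders.
	ca, err := x509.ParseCertificate(mustDecode("ca.crt"))
	if err != nil {
		panic(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(mustDecode("ca.key"))
	if err != nil {
		panic(err)
	}
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The SAN list from the log: service VIP, loopbacks, node IPs, HA VIP.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.49.2"), net.ParseIP("192.168.49.3"),
			net.ParseIP("192.168.49.4"), net.ParseIP("192.168.49.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	if err := os.WriteFile("apiserver.crt", certPEM, 0644); err != nil {
		panic(err)
	}
	if err := os.WriteFile("apiserver.key", keyPEM, 0600); err != nil {
		panic(err)
	}
}
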
	I1018 17:39:53.268969   51251 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.key
	I1018 17:39:53.268988   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1018 17:39:53.269005   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1018 17:39:53.269023   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1018 17:39:53.269043   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1018 17:39:53.269070   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1018 17:39:53.269094   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1018 17:39:53.269112   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1018 17:39:53.269123   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1018 17:39:53.269179   51251 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem (1338 bytes)
	W1018 17:39:53.269213   51251 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320_empty.pem, impossibly tiny 0 bytes
	I1018 17:39:53.269225   51251 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 17:39:53.269249   51251 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem (1078 bytes)
	I1018 17:39:53.269273   51251 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem (1123 bytes)
	I1018 17:39:53.269299   51251 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem (1675 bytes)
	I1018 17:39:53.269346   51251 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem (1708 bytes)
	I1018 17:39:53.269376   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> /usr/share/ca-certificates/43202.pem
	I1018 17:39:53.269392   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:39:53.269403   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem -> /usr/share/ca-certificates/4320.pem
	I1018 17:39:53.269946   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 17:39:53.289258   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 17:39:53.307330   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 17:39:53.325012   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 17:39:53.342168   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1018 17:39:53.359559   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 17:39:53.376235   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 17:39:53.393388   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 17:39:53.409944   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /usr/share/ca-certificates/43202.pem (1708 bytes)
	I1018 17:39:53.427591   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 17:39:53.443532   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem --> /usr/share/ca-certificates/4320.pem (1338 bytes)
	I1018 17:39:53.459786   51251 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 17:39:53.472627   51251 ssh_runner.go:195] Run: openssl version
	I1018 17:39:53.478997   51251 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43202.pem && ln -fs /usr/share/ca-certificates/43202.pem /etc/ssl/certs/43202.pem"
	I1018 17:39:53.486807   51251 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43202.pem
	I1018 17:39:53.490229   51251 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 17:19 /usr/share/ca-certificates/43202.pem
	I1018 17:39:53.490289   51251 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43202.pem
	I1018 17:39:53.534916   51251 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43202.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 17:39:53.547040   51251 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 17:39:53.561930   51251 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:39:53.567602   51251 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:39:53.567707   51251 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:39:53.617018   51251 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 17:39:53.628559   51251 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4320.pem && ln -fs /usr/share/ca-certificates/4320.pem /etc/ssl/certs/4320.pem"
	I1018 17:39:53.641445   51251 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4320.pem
	I1018 17:39:53.645568   51251 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 17:19 /usr/share/ca-certificates/4320.pem
	I1018 17:39:53.645680   51251 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4320.pem
	I1018 17:39:53.715014   51251 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4320.pem /etc/ssl/certs/51391683.0"
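
Each CA above is installed twice: the PEM lands in /usr/share/ca-certificates and is then symlinked into /etc/ssl/certs under its OpenSSL subject hash (b5213941.0 for minikubeCA.pem, for example), which is how OpenSSL-based clients look up trusted CAs. An illustrative sketch of that hash-and-link step, shelling out to openssl just as the log does:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem" // one of the three certs above
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941 for minikubeCA.pem
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace any stale link
	if err := os.Symlink(pemPath, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", pemPath)
}
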
	I1018 17:39:53.744004   51251 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 17:39:53.751940   51251 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 17:39:53.829686   51251 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 17:39:53.890601   51251 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 17:39:53.957371   51251 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 17:39:54.017003   51251 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 17:39:54.064655   51251 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
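
The six -checkend 86400 probes above ask whether each control-plane certificate is still valid 24 hours from now; a non-zero exit would force regeneration. The same check for one certificate, written against Go's crypto/x509 (illustrative sketch):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Equivalent of `openssl x509 -checkend 86400`: fail if the cert
	// will no longer be valid 24 hours from now.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h; regenerate it")
		os.Exit(1)
	}
	fmt.Println("certificate still valid beyond 24h")
}
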
	I1018 17:39:54.111921   51251 kubeadm.go:400] StartCluster: {Name:ha-181800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-181800 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 17:39:54.112099   51251 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 17:39:54.112174   51251 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 17:39:54.163162   51251 cri.go:89] found id: "dda012a63c45a5c37a124da696c59f0ac82f51c6728ee30f5a6b3a9df6f28b54"
	I1018 17:39:54.163230   51251 cri.go:89] found id: "ac8ef32697a356e273cd1b84ce23b6e628c802ef7b211f001fc50bb472635814"
	I1018 17:39:54.163250   51251 cri.go:89] found id: "4957aae3df6cdc996ba2129d1f43210ebdec1c480e6db0115ee34f32691af151"
	I1018 17:39:54.163265   51251 cri.go:89] found id: "6e9b6c2f0e69c56776af6be092e8313aef540b7319fd0664f3eb3f947353a66b"
	I1018 17:39:54.163282   51251 cri.go:89] found id: "a0776ff98d8411ec5ae52a11de472cb17e1d8c764d642bf18a22aec8b44a08ee"
	I1018 17:39:54.163300   51251 cri.go:89] found id: ""
	I1018 17:39:54.163370   51251 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 17:39:54.178952   51251 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T17:39:54Z" level=error msg="open /run/runc: no such file or directory"
	I1018 17:39:54.179088   51251 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 17:39:54.202035   51251 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 17:39:54.202104   51251 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 17:39:54.202180   51251 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 17:39:54.218306   51251 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 17:39:54.218743   51251 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-181800" does not appear in /home/jenkins/minikube-integration/21409-2509/kubeconfig
	I1018 17:39:54.218882   51251 kubeconfig.go:62] /home/jenkins/minikube-integration/21409-2509/kubeconfig needs updating (will repair): [kubeconfig missing "ha-181800" cluster setting kubeconfig missing "ha-181800" context setting]
	I1018 17:39:54.219252   51251 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/kubeconfig: {Name:mk229d7d0b6575771899e0ce8a346bbddf5cf86b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:39:54.219794   51251 kapi.go:59] client config for ha-181800: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/client.key", CAFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
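
The rest.Config dumped above is what the restart path uses to talk to the API server directly at 192.168.49.2:8443 with the profile's client certificate. A minimal client-go sketch of the same shape, with placeholder certificate paths (illustrative only; requires k8s.io/client-go):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Placeholder paths; the log points them at the ha-181800 profile dir.
	cfg := &rest.Config{
		Host: "https://192.168.49.2:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/path/to/client.crt",
			KeyFile:  "/path/to/client.key",
			CAFile:   "/path/to/ca.crt",
		},
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("nodes in ha-181800:", len(nodes.Items))
}
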
	I1018 17:39:54.220519   51251 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1018 17:39:54.220606   51251 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1018 17:39:54.220635   51251 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1018 17:39:54.220585   51251 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1018 17:39:54.220726   51251 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1018 17:39:54.220753   51251 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1018 17:39:54.221075   51251 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 17:39:54.234375   51251 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1018 17:39:54.234436   51251 kubeadm.go:601] duration metric: took 32.30335ms to restartPrimaryControlPlane
	I1018 17:39:54.234460   51251 kubeadm.go:402] duration metric: took 122.54698ms to StartCluster
	I1018 17:39:54.234487   51251 settings.go:142] acquiring lock: {Name:mk3a3fd093bc95e20cc1842611fedcbe4a79e692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:39:54.234565   51251 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-2509/kubeconfig
	I1018 17:39:54.235140   51251 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/kubeconfig: {Name:mk229d7d0b6575771899e0ce8a346bbddf5cf86b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:39:54.235365   51251 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 17:39:54.235417   51251 start.go:241] waiting for startup goroutines ...
	I1018 17:39:54.235446   51251 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 17:39:54.235957   51251 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:39:54.241374   51251 out.go:179] * Enabled addons: 
	I1018 17:39:54.244317   51251 addons.go:514] duration metric: took 8.873213ms for enable addons: enabled=[]
	I1018 17:39:54.244381   51251 start.go:246] waiting for cluster config update ...
	I1018 17:39:54.244403   51251 start.go:255] writing updated cluster config ...
	I1018 17:39:54.247646   51251 out.go:203] 
	I1018 17:39:54.250620   51251 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:39:54.250787   51251 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/config.json ...
	I1018 17:39:54.254182   51251 out.go:179] * Starting "ha-181800-m02" control-plane node in "ha-181800" cluster
	I1018 17:39:54.257073   51251 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 17:39:54.259992   51251 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 17:39:54.262894   51251 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 17:39:54.262941   51251 cache.go:58] Caching tarball of preloaded images
	I1018 17:39:54.263061   51251 preload.go:233] Found /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 17:39:54.263094   51251 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 17:39:54.263229   51251 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/config.json ...
	I1018 17:39:54.263458   51251 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 17:39:54.291252   51251 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 17:39:54.291269   51251 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 17:39:54.291282   51251 cache.go:232] Successfully downloaded all kic artifacts
	I1018 17:39:54.291303   51251 start.go:360] acquireMachinesLock for ha-181800-m02: {Name:mk36a488c0fbfc8557c6ba291b969aad85b45635 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 17:39:54.291352   51251 start.go:364] duration metric: took 33.977µs to acquireMachinesLock for "ha-181800-m02"
	I1018 17:39:54.291370   51251 start.go:96] Skipping create...Using existing machine configuration
	I1018 17:39:54.291375   51251 fix.go:54] fixHost starting: m02
	I1018 17:39:54.291629   51251 cli_runner.go:164] Run: docker container inspect ha-181800-m02 --format={{.State.Status}}
	I1018 17:39:54.318512   51251 fix.go:112] recreateIfNeeded on ha-181800-m02: state=Stopped err=<nil>
	W1018 17:39:54.318536   51251 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 17:39:54.321781   51251 out.go:252] * Restarting existing docker container for "ha-181800-m02" ...
	I1018 17:39:54.321859   51251 cli_runner.go:164] Run: docker start ha-181800-m02
	I1018 17:39:54.692758   51251 cli_runner.go:164] Run: docker container inspect ha-181800-m02 --format={{.State.Status}}
	I1018 17:39:54.723920   51251 kic.go:430] container "ha-181800-m02" state is running.
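
fixHost keys off the container's Docker state: the stopped ha-181800-m02 is started and then re-inspected until {{.State.Status}} reports running. An illustrative Go wrapper around the same probe (not minikube's code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "container", "inspect",
		"-f", "{{.State.Status}}", "ha-181800-m02").Output()
	if err != nil {
		panic(err)
	}
	switch state := strings.TrimSpace(string(out)); state {
	case "running":
		fmt.Println("container is running")
	case "exited":
		fmt.Println("container is stopped; `docker start` would be needed")
	default:
		fmt.Println("unexpected state:", state)
	}
}
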
	I1018 17:39:54.724263   51251 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-181800-m02
	I1018 17:39:54.749215   51251 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/config.json ...
	I1018 17:39:54.749467   51251 machine.go:93] provisionDockerMachine start ...
	I1018 17:39:54.749523   51251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m02
	I1018 17:39:54.781536   51251 main.go:141] libmachine: Using SSH client type: native
	I1018 17:39:54.781830   51251 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I1018 17:39:54.781839   51251 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 17:39:54.782427   51251 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39794->127.0.0.1:32813: read: connection reset by peer
	I1018 17:39:58.082162   51251 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-181800-m02
	
	I1018 17:39:58.082184   51251 ubuntu.go:182] provisioning hostname "ha-181800-m02"
	I1018 17:39:58.082261   51251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m02
	I1018 17:39:58.126530   51251 main.go:141] libmachine: Using SSH client type: native
	I1018 17:39:58.126844   51251 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I1018 17:39:58.126855   51251 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-181800-m02 && echo "ha-181800-m02" | sudo tee /etc/hostname
	I1018 17:39:58.443573   51251 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-181800-m02
	
	I1018 17:39:58.443690   51251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m02
	I1018 17:39:58.478907   51251 main.go:141] libmachine: Using SSH client type: native
	I1018 17:39:58.479213   51251 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I1018 17:39:58.479243   51251 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-181800-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-181800-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-181800-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 17:39:58.737653   51251 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 17:39:58.737680   51251 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-2509/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-2509/.minikube}
	I1018 17:39:58.737725   51251 ubuntu.go:190] setting up certificates
	I1018 17:39:58.737736   51251 provision.go:84] configureAuth start
	I1018 17:39:58.737818   51251 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-181800-m02
	I1018 17:39:58.774675   51251 provision.go:143] copyHostCerts
	I1018 17:39:58.774718   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem
	I1018 17:39:58.774757   51251 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem, removing ...
	I1018 17:39:58.774769   51251 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem
	I1018 17:39:58.774848   51251 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem (1123 bytes)
	I1018 17:39:58.774946   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem
	I1018 17:39:58.774970   51251 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem, removing ...
	I1018 17:39:58.774977   51251 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem
	I1018 17:39:58.775018   51251 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem (1675 bytes)
	I1018 17:39:58.775074   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem
	I1018 17:39:58.775100   51251 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem, removing ...
	I1018 17:39:58.775109   51251 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem
	I1018 17:39:58.775135   51251 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem (1078 bytes)
	I1018 17:39:58.775197   51251 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem org=jenkins.ha-181800-m02 san=[127.0.0.1 192.168.49.3 ha-181800-m02 localhost minikube]
	I1018 17:39:59.196567   51251 provision.go:177] copyRemoteCerts
	I1018 17:39:59.197114   51251 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 17:39:59.197174   51251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m02
	I1018 17:39:59.222600   51251 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m02/id_rsa Username:docker}
	I1018 17:39:59.394297   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1018 17:39:59.394389   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 17:39:59.450203   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1018 17:39:59.450288   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1018 17:39:59.513512   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1018 17:39:59.513624   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 17:39:59.573995   51251 provision.go:87] duration metric: took 836.238905ms to configureAuth
	I1018 17:39:59.574021   51251 ubuntu.go:206] setting minikube options for container-runtime
	I1018 17:39:59.574290   51251 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:39:59.574415   51251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m02
	I1018 17:39:59.606597   51251 main.go:141] libmachine: Using SSH client type: native
	I1018 17:39:59.606908   51251 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I1018 17:39:59.606927   51251 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 17:40:00.196427   51251 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 17:40:00.196520   51251 machine.go:96] duration metric: took 5.447042221s to provisionDockerMachine
	I1018 17:40:00.196547   51251 start.go:293] postStartSetup for "ha-181800-m02" (driver="docker")
	I1018 17:40:00.196572   51251 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 17:40:00.196694   51251 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 17:40:00.196782   51251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m02
	I1018 17:40:00.238873   51251 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m02/id_rsa Username:docker}
	I1018 17:40:00.392500   51251 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 17:40:00.403930   51251 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 17:40:00.403959   51251 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 17:40:00.403971   51251 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/addons for local assets ...
	I1018 17:40:00.404043   51251 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/files for local assets ...
	I1018 17:40:00.404125   51251 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> 43202.pem in /etc/ssl/certs
	I1018 17:40:00.404133   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> /etc/ssl/certs/43202.pem
	I1018 17:40:00.404244   51251 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 17:40:00.423321   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /etc/ssl/certs/43202.pem (1708 bytes)
	I1018 17:40:00.459796   51251 start.go:296] duration metric: took 263.21852ms for postStartSetup
	I1018 17:40:00.459966   51251 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 17:40:00.460049   51251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m02
	I1018 17:40:00.503330   51251 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m02/id_rsa Username:docker}
	I1018 17:40:00.631049   51251 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 17:40:00.645680   51251 fix.go:56] duration metric: took 6.354295561s for fixHost
	I1018 17:40:00.645709   51251 start.go:83] releasing machines lock for "ha-181800-m02", held for 6.35434937s
	I1018 17:40:00.645791   51251 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-181800-m02
	I1018 17:40:00.682830   51251 out.go:179] * Found network options:
	I1018 17:40:00.685894   51251 out.go:179]   - NO_PROXY=192.168.49.2
	W1018 17:40:00.688804   51251 proxy.go:120] fail to check proxy env: Error ip not in block
	W1018 17:40:00.688858   51251 proxy.go:120] fail to check proxy env: Error ip not in block
	I1018 17:40:00.688930   51251 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 17:40:00.689085   51251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m02
	I1018 17:40:00.689351   51251 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 17:40:00.689409   51251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m02
	I1018 17:40:00.730142   51251 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m02/id_rsa Username:docker}
	I1018 17:40:00.730174   51251 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m02/id_rsa Username:docker}
	I1018 17:40:01.294197   51251 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 17:40:01.312592   51251 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 17:40:01.312744   51251 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 17:40:01.330228   51251 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 17:40:01.330302   51251 start.go:495] detecting cgroup driver to use...
	I1018 17:40:01.330348   51251 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 17:40:01.330425   51251 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 17:40:01.357073   51251 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 17:40:01.416356   51251 docker.go:218] disabling cri-docker service (if available) ...
	I1018 17:40:01.416475   51251 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 17:40:01.453551   51251 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 17:40:01.481435   51251 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 17:40:01.742441   51251 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 17:40:01.978817   51251 docker.go:234] disabling docker service ...
	I1018 17:40:01.978936   51251 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 17:40:02.001514   51251 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 17:40:02.021678   51251 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 17:40:02.249968   51251 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 17:40:02.480556   51251 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 17:40:02.498908   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 17:40:02.526424   51251 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 17:40:02.526493   51251 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:40:02.542071   51251 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 17:40:02.542141   51251 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:40:02.559770   51251 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:40:02.574006   51251 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:40:02.589455   51251 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 17:40:02.598587   51251 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:40:02.612076   51251 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:40:02.624069   51251 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:40:02.637136   51251 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 17:40:02.652415   51251 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
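
Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings (reconstructed from the commands, not copied from the node):

pause_image = "registry.k8s.io/pause:3.10.1"
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]

The sysctl lets containers bind ports below 1024 without extra capabilities, and the final echo 1 > /proc/sys/net/ipv4/ip_forward enables IP forwarding so pod traffic can be routed off the node.
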
	I1018 17:40:02.662181   51251 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 17:40:02.863894   51251 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 17:41:33.166156   51251 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.302227656s)
	I1018 17:41:33.166194   51251 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 17:41:33.166252   51251 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 17:41:33.170771   51251 start.go:563] Will wait 60s for crictl version
	I1018 17:41:33.170830   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:41:33.176098   51251 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 17:41:33.213255   51251 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 17:41:33.213351   51251 ssh_runner.go:195] Run: crio --version
	I1018 17:41:33.258540   51251 ssh_runner.go:195] Run: crio --version
	I1018 17:41:33.296286   51251 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 17:41:33.299353   51251 out.go:179]   - env NO_PROXY=192.168.49.2
	I1018 17:41:33.302220   51251 cli_runner.go:164] Run: docker network inspect ha-181800 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 17:41:33.319775   51251 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1018 17:41:33.324290   51251 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 17:41:33.336317   51251 mustload.go:65] Loading cluster: ha-181800
	I1018 17:41:33.336557   51251 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:41:33.336817   51251 cli_runner.go:164] Run: docker container inspect ha-181800 --format={{.State.Status}}
	I1018 17:41:33.362604   51251 host.go:66] Checking if "ha-181800" exists ...
	I1018 17:41:33.362892   51251 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800 for IP: 192.168.49.3
	I1018 17:41:33.362901   51251 certs.go:195] generating shared ca certs ...
	I1018 17:41:33.362915   51251 certs.go:227] acquiring lock for ca certs: {Name:mk544ed642fa2832e9f6dd22fa45f3270b7c1ad4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:41:33.363034   51251 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key
	I1018 17:41:33.363081   51251 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key
	I1018 17:41:33.363088   51251 certs.go:257] generating profile certs ...
	I1018 17:41:33.363157   51251 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/client.key
	I1018 17:41:33.363222   51251 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.key.887e0b27
	I1018 17:41:33.363266   51251 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.key
	I1018 17:41:33.363274   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1018 17:41:33.363286   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1018 17:41:33.363296   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1018 17:41:33.363306   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1018 17:41:33.363316   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1018 17:41:33.363328   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1018 17:41:33.363338   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1018 17:41:33.363348   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1018 17:41:33.363398   51251 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem (1338 bytes)
	W1018 17:41:33.363424   51251 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320_empty.pem, impossibly tiny 0 bytes
	I1018 17:41:33.363433   51251 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 17:41:33.363455   51251 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem (1078 bytes)
	I1018 17:41:33.363476   51251 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem (1123 bytes)
	I1018 17:41:33.363496   51251 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem (1675 bytes)
	I1018 17:41:33.363536   51251 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem (1708 bytes)
	I1018 17:41:33.363565   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:41:33.363579   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem -> /usr/share/ca-certificates/4320.pem
	I1018 17:41:33.363590   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> /usr/share/ca-certificates/43202.pem
	I1018 17:41:33.363643   51251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:41:33.388336   51251 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800/id_rsa Username:docker}
	I1018 17:41:33.489250   51251 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1018 17:41:33.493494   51251 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1018 17:41:33.511835   51251 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1018 17:41:33.515898   51251 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1018 17:41:33.524188   51251 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1018 17:41:33.527936   51251 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1018 17:41:33.536545   51251 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1018 17:41:33.540347   51251 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1018 17:41:33.549002   51251 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1018 17:41:33.552698   51251 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1018 17:41:33.561692   51251 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1018 17:41:33.565522   51251 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1018 17:41:33.574471   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 17:41:33.598033   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 17:41:33.620604   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 17:41:33.644520   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 17:41:33.671246   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1018 17:41:33.694599   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 17:41:33.716649   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 17:41:33.739805   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 17:41:33.761744   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 17:41:33.784279   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem --> /usr/share/ca-certificates/4320.pem (1338 bytes)
	I1018 17:41:33.807665   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /usr/share/ca-certificates/43202.pem (1708 bytes)
	I1018 17:41:33.831497   51251 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1018 17:41:33.845903   51251 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1018 17:41:33.860149   51251 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1018 17:41:33.874010   51251 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1018 17:41:33.893500   51251 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1018 17:41:33.908151   51251 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1018 17:41:33.922971   51251 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1018 17:41:33.937486   51251 ssh_runner.go:195] Run: openssl version
	I1018 17:41:33.944301   51251 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43202.pem && ln -fs /usr/share/ca-certificates/43202.pem /etc/ssl/certs/43202.pem"
	I1018 17:41:33.953654   51251 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43202.pem
	I1018 17:41:33.958036   51251 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 17:19 /usr/share/ca-certificates/43202.pem
	I1018 17:41:33.958171   51251 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43202.pem
	I1018 17:41:34.004993   51251 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43202.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 17:41:34.015337   51251 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 17:41:34.024718   51251 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:41:34.029508   51251 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:41:34.029667   51251 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:41:34.076487   51251 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 17:41:34.085949   51251 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4320.pem && ln -fs /usr/share/ca-certificates/4320.pem /etc/ssl/certs/4320.pem"
	I1018 17:41:34.095637   51251 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4320.pem
	I1018 17:41:34.100153   51251 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 17:19 /usr/share/ca-certificates/4320.pem
	I1018 17:41:34.100269   51251 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4320.pem
	I1018 17:41:34.148268   51251 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4320.pem /etc/ssl/certs/51391683.0"
	I1018 17:41:34.158037   51251 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 17:41:34.162480   51251 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 17:41:34.206936   51251 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 17:41:34.251076   51251 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 17:41:34.294598   51251 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 17:41:34.337252   51251 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 17:41:34.379050   51251 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
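	The six openssl runs above are 24-hour expiry checks: -checkend 86400 exits 0 only if the certificate is still valid a day from now. A minimal sketch of the same check by hand, assuming the profile and node names from this log:
	  $ minikube -p ha-181800 ssh -n ha-181800-m02 \
	      "sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 && echo still-valid || echo expires-within-24h"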
	I1018 17:41:34.422861   51251 kubeadm.go:934] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1018 17:41:34.423031   51251 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-181800-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-181800 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
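	The [Unit]/[Service] snippet above is the kubelet drop-in minikube renders for the joining node; a sketch for inspecting what actually lands on disk (node name assumed, paths match the scp steps below):
	  $ minikube -p ha-181800 ssh -n ha-181800-m02 "sudo systemctl cat kubelet"
	  $ minikube -p ha-181800 ssh -n ha-181800-m02 "sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf"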
	I1018 17:41:34.423078   51251 kube-vip.go:115] generating kube-vip config ...
	I1018 17:41:34.423166   51251 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1018 17:41:34.435895   51251 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
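	Because lsmod reports no ip_vs modules, kube-vip is generated without control-plane load balancing. With the docker driver the modules have to come from the host kernel, so a sketch to check and load them on the host (standard ip_vs module names, not taken from the log):
	  $ for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh; do sudo modprobe "$m"; done
	  $ lsmod | grep ip_vs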
	I1018 17:41:34.435996   51251 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
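	The manifest above is written to the node as a static pod by the kube-vip.yaml scp step below; a sketch to confirm it landed and that the VIP from its config is bound (profile, node name, VIP and interface taken from the log):
	  $ minikube -p ha-181800 ssh -n ha-181800-m02 "sudo head -n 20 /etc/kubernetes/manifests/kube-vip.yaml"
	  $ minikube -p ha-181800 ssh -n ha-181800-m02 "ip addr show eth0 | grep 192.168.49.254 || echo 'VIP not bound'"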
	I1018 17:41:34.436081   51251 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 17:41:34.444655   51251 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 17:41:34.444772   51251 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1018 17:41:34.452743   51251 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1018 17:41:34.466348   51251 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 17:41:34.479899   51251 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1018 17:41:34.497063   51251 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1018 17:41:34.500892   51251 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 17:41:34.516267   51251 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 17:41:34.674326   51251 ssh_runner.go:195] Run: sudo systemctl start kubelet
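	After the daemon-reload and kubelet start above (repeated once more below while verifying components), a sketch for checking that the unit actually came up on the node (node name assumed):
	  $ minikube -p ha-181800 ssh -n ha-181800-m02 "systemctl is-active kubelet; sudo journalctl -u kubelet -n 20 --no-pager"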
	I1018 17:41:34.690850   51251 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 17:41:34.691288   51251 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:41:34.696864   51251 out.go:179] * Verifying Kubernetes components...
	I1018 17:41:34.699590   51251 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 17:41:34.858485   51251 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 17:41:34.875760   51251 kapi.go:59] client config for ha-181800: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/client.key", CAFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1018 17:41:34.876060   51251 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1018 17:41:34.876378   51251 node_ready.go:35] waiting up to 6m0s for node "ha-181800-m02" to be "Ready" ...
	I1018 17:41:41.842514   51251 node_ready.go:49] node "ha-181800-m02" is "Ready"
	I1018 17:41:41.842547   51251 node_ready.go:38] duration metric: took 6.966151068s for node "ha-181800-m02" to be "Ready" ...
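	The node reports Ready after roughly 7 seconds; a sketch of the equivalent check from the host, assuming the kubectl context minikube creates is named after the profile:
	  $ kubectl --context ha-181800 get node ha-181800-m02 \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'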
	I1018 17:41:41.842561   51251 api_server.go:52] waiting for apiserver process to appear ...
	I1018 17:41:41.842620   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:42.343686   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:42.843043   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:43.343313   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:43.843326   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:44.343648   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:44.843315   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:45.342911   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:45.842777   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:46.343420   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:46.843693   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:47.342746   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:47.843464   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:48.342878   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:48.843391   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:49.342759   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:49.843483   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:50.342789   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:50.842761   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:51.342785   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:51.843356   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:52.342785   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:52.843177   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:53.342698   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:53.842872   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:54.343544   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:54.842904   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:55.343425   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:55.843434   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:56.343297   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:56.843518   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:57.343357   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:57.842816   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:58.343642   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:58.842783   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:59.343043   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:59.843412   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:00.342951   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:00.843389   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:01.342774   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:01.842787   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:02.343236   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:02.842685   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:03.342751   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:03.843695   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:04.342729   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:04.843543   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:05.343721   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:05.843447   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:06.342743   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:06.842790   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:07.343656   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:07.843541   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:08.343267   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:08.843707   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:09.342771   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:09.843748   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:10.342856   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:10.842752   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:11.343307   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:11.842677   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:12.343443   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:12.843733   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:13.343641   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:13.842734   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:14.343649   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:14.842779   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:15.342756   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:15.842763   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:16.343741   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:16.842779   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:17.342825   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:17.843340   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:18.342759   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:18.842772   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:19.342755   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:19.842777   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:20.343137   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:20.843594   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:21.343397   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:21.843388   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:22.342798   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:22.843107   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:23.343587   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:23.842910   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:24.343458   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:24.843264   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:25.342775   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:25.842894   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:26.343732   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:26.842775   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:27.342787   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:27.842760   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:28.342772   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:28.843266   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:29.343220   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:29.843228   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:30.343087   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:30.842732   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:31.342878   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:31.843084   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:32.343181   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:32.843480   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:33.343320   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:33.842755   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:34.342929   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
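	The loop above polls pgrep about twice a second for a kube-apiserver process on the node being verified and never finds one, so after roughly a minute minikube falls back to listing CRI containers and gathering logs (next lines). A sketch of the same two checks by hand (node name assumed):
	  $ minikube -p ha-181800 ssh -n ha-181800-m02 \
	      "sudo pgrep -xnf 'kube-apiserver.*minikube.*' || sudo crictl ps -a --name kube-apiserver"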
	I1018 17:42:34.842842   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:42:34.842930   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:42:34.869988   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:42:34.870010   51251 cri.go:89] found id: ""
	I1018 17:42:34.870018   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:42:34.870073   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:34.873710   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:42:34.873778   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:42:34.899173   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:42:34.899196   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:42:34.899202   51251 cri.go:89] found id: ""
	I1018 17:42:34.899209   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:42:34.899263   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:34.903214   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:34.906828   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:42:34.906903   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:42:34.933625   51251 cri.go:89] found id: ""
	I1018 17:42:34.933648   51251 logs.go:282] 0 containers: []
	W1018 17:42:34.933656   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:42:34.933663   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:42:34.933723   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:42:34.959655   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:42:34.959675   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:42:34.959680   51251 cri.go:89] found id: ""
	I1018 17:42:34.959688   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:42:34.959743   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:34.972509   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:34.977434   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:42:34.977506   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:42:35.014139   51251 cri.go:89] found id: ""
	I1018 17:42:35.014165   51251 logs.go:282] 0 containers: []
	W1018 17:42:35.014173   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:42:35.014180   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:42:35.014287   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:42:35.047968   51251 cri.go:89] found id: "f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:42:35.047993   51251 cri.go:89] found id: ""
	I1018 17:42:35.048002   51251 logs.go:282] 1 containers: [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c]
	I1018 17:42:35.048056   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:35.052096   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:42:35.052159   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:42:35.087604   51251 cri.go:89] found id: ""
	I1018 17:42:35.087628   51251 logs.go:282] 0 containers: []
	W1018 17:42:35.087636   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:42:35.087645   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:42:35.087658   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:42:35.135319   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:42:35.135352   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:42:35.186498   51251 logs.go:123] Gathering logs for kube-controller-manager [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c] ...
	I1018 17:42:35.186531   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:42:35.217338   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:42:35.217381   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:42:35.327154   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:42:35.327184   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:42:35.341645   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:42:35.341672   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:42:35.747254   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:42:35.739248    1479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:35.739909    1479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:35.741574    1479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:35.742106    1479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:35.743686    1479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:42:35.739248    1479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:35.739909    1479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:35.741574    1479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:35.742106    1479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:35.743686    1479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
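	The describe-nodes gathering fails because kubectl on the node uses the node-local kubeconfig, which points at localhost:8443 where no apiserver is listening. A sketch that aims the same kubectl at the first control plane instead (server address taken from the ClientConfig override above; whether that endpoint answers is not something this log guarantees):
	  $ minikube -p ha-181800 ssh -n ha-181800-m02 \
	      "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig --server=https://192.168.49.2:8443 get nodes"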
	I1018 17:42:35.747277   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:42:35.747291   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:42:35.784796   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:42:35.784825   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:42:35.811760   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:42:35.811786   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:42:35.886991   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:42:35.887025   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:42:35.921904   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:42:35.921933   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:42:38.449291   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:38.459790   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:42:38.459857   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:42:38.486350   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:42:38.486373   51251 cri.go:89] found id: ""
	I1018 17:42:38.486383   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:42:38.486444   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:38.490359   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:42:38.490430   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:42:38.518049   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:42:38.518073   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:42:38.518078   51251 cri.go:89] found id: ""
	I1018 17:42:38.518097   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:42:38.518156   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:38.522183   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:38.526138   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:42:38.526213   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:42:38.552857   51251 cri.go:89] found id: ""
	I1018 17:42:38.552881   51251 logs.go:282] 0 containers: []
	W1018 17:42:38.552890   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:42:38.552896   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:42:38.552996   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:42:38.581427   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:42:38.581447   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:42:38.581452   51251 cri.go:89] found id: ""
	I1018 17:42:38.581460   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:42:38.581516   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:38.585308   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:38.588834   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:42:38.588907   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:42:38.626035   51251 cri.go:89] found id: ""
	I1018 17:42:38.626060   51251 logs.go:282] 0 containers: []
	W1018 17:42:38.626068   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:42:38.626074   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:42:38.626180   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:42:38.654519   51251 cri.go:89] found id: "f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:42:38.654541   51251 cri.go:89] found id: ""
	I1018 17:42:38.654549   51251 logs.go:282] 1 containers: [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c]
	I1018 17:42:38.654606   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:38.659468   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:42:38.659536   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:42:38.685688   51251 cri.go:89] found id: ""
	I1018 17:42:38.685717   51251 logs.go:282] 0 containers: []
	W1018 17:42:38.685726   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:42:38.685735   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:42:38.685747   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:42:38.783795   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:42:38.783829   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:42:38.826341   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:42:38.826373   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:42:38.860295   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:42:38.860328   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:42:38.914363   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:42:38.914398   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:42:38.945563   51251 logs.go:123] Gathering logs for kube-controller-manager [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c] ...
	I1018 17:42:38.945589   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:42:38.986953   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:42:38.986976   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:42:39.069689   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:42:39.069729   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:42:39.111763   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:42:39.111827   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:42:39.125634   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:42:39.125711   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:42:39.199836   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:42:39.189569    1644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:39.190870    1644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:39.192604    1644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:39.193407    1644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:39.194944    1644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:42:39.189569    1644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:39.190870    1644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:39.192604    1644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:39.193407    1644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:39.194944    1644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:42:39.199901   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:42:39.199927   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:42:41.727280   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:41.737746   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:42:41.737830   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:42:41.764569   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:42:41.764587   51251 cri.go:89] found id: ""
	I1018 17:42:41.764595   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:42:41.764651   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:41.768619   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:42:41.768692   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:42:41.795219   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:42:41.795239   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:42:41.795244   51251 cri.go:89] found id: ""
	I1018 17:42:41.795251   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:42:41.795315   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:41.799045   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:41.802635   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:42:41.802708   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:42:41.829223   51251 cri.go:89] found id: ""
	I1018 17:42:41.829246   51251 logs.go:282] 0 containers: []
	W1018 17:42:41.829256   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:42:41.829262   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:42:41.829319   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:42:41.863591   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:42:41.863612   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:42:41.863617   51251 cri.go:89] found id: ""
	I1018 17:42:41.863625   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:42:41.863708   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:41.867633   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:41.871288   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:42:41.871365   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:42:41.907130   51251 cri.go:89] found id: ""
	I1018 17:42:41.907154   51251 logs.go:282] 0 containers: []
	W1018 17:42:41.907162   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:42:41.907179   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:42:41.907239   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:42:41.937193   51251 cri.go:89] found id: "f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:42:41.937215   51251 cri.go:89] found id: ""
	I1018 17:42:41.937223   51251 logs.go:282] 1 containers: [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c]
	I1018 17:42:41.937281   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:41.941168   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:42:41.941244   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:42:41.993845   51251 cri.go:89] found id: ""
	I1018 17:42:41.993923   51251 logs.go:282] 0 containers: []
	W1018 17:42:41.993944   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:42:41.993955   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:42:41.993967   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:42:42.041265   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:42:42.041296   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:42:42.070875   51251 logs.go:123] Gathering logs for kube-controller-manager [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c] ...
	I1018 17:42:42.070904   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:42:42.106610   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:42:42.106642   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:42:42.194367   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:42:42.194403   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:42:42.229250   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:42:42.229279   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:42:42.283222   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:42:42.283254   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:42:42.343661   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:42:42.343694   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:42:42.376582   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:42:42.376608   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:42:42.475562   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:42:42.475597   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:42:42.488812   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:42:42.488842   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:42:42.564172   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:42:42.556222    1787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:42.556691    1787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:42.558297    1787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:42.558653    1787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:42.560347    1787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:42:42.556222    1787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:42.556691    1787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:42.558297    1787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:42.558653    1787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:42.560347    1787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:42:45.065078   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:45.086837   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:42:45.086979   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:42:45.165006   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:42:45.165027   51251 cri.go:89] found id: ""
	I1018 17:42:45.165035   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:42:45.165103   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:45.172323   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:42:45.172423   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:42:45.217483   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:42:45.217515   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:42:45.217521   51251 cri.go:89] found id: ""
	I1018 17:42:45.217530   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:42:45.217596   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:45.223128   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:45.227931   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:42:45.228025   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:42:45.283738   51251 cri.go:89] found id: ""
	I1018 17:42:45.283769   51251 logs.go:282] 0 containers: []
	W1018 17:42:45.283789   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:42:45.283818   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:42:45.283897   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:42:45.321652   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:42:45.321679   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:42:45.321685   51251 cri.go:89] found id: ""
	I1018 17:42:45.321694   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:42:45.321760   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:45.332292   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:45.337760   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:42:45.338055   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:42:45.381645   51251 cri.go:89] found id: ""
	I1018 17:42:45.381666   51251 logs.go:282] 0 containers: []
	W1018 17:42:45.381675   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:42:45.381681   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:42:45.381740   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:42:45.413702   51251 cri.go:89] found id: "f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:42:45.413726   51251 cri.go:89] found id: ""
	I1018 17:42:45.413735   51251 logs.go:282] 1 containers: [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c]
	I1018 17:42:45.413793   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:45.417551   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:42:45.417654   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:42:45.444154   51251 cri.go:89] found id: ""
	I1018 17:42:45.444178   51251 logs.go:282] 0 containers: []
	W1018 17:42:45.444186   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:42:45.444195   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:42:45.444206   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:42:45.537154   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:42:45.537189   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:42:45.618318   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:42:45.608985    1856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:45.610405    1856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:45.610978    1856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:45.612722    1856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:45.613098    1856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:42:45.608985    1856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:45.610405    1856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:45.610978    1856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:45.612722    1856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:45.613098    1856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:42:45.618339   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:42:45.618352   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:42:45.643567   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:42:45.643592   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:42:45.680148   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:42:45.680183   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:42:45.732576   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:42:45.732648   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:42:45.763213   51251 logs.go:123] Gathering logs for kube-controller-manager [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c] ...
	I1018 17:42:45.763299   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:42:45.790736   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:42:45.790804   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:42:45.802909   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:42:45.802991   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:42:45.850168   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:42:45.850251   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:42:45.926703   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:42:45.926741   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:42:48.486114   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:48.497086   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:42:48.497160   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:42:48.525605   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:42:48.525625   51251 cri.go:89] found id: ""
	I1018 17:42:48.525634   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:42:48.525690   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:48.529399   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:42:48.529536   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:42:48.556240   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:42:48.556261   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:42:48.556267   51251 cri.go:89] found id: ""
	I1018 17:42:48.556274   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:42:48.556331   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:48.560148   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:48.563747   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:42:48.563816   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:42:48.591484   51251 cri.go:89] found id: ""
	I1018 17:42:48.591509   51251 logs.go:282] 0 containers: []
	W1018 17:42:48.591518   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:42:48.591524   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:42:48.591584   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:42:48.621441   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:42:48.621461   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:42:48.621467   51251 cri.go:89] found id: ""
	I1018 17:42:48.621475   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:42:48.621531   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:48.625098   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:48.628679   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:42:48.628776   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:42:48.655455   51251 cri.go:89] found id: ""
	I1018 17:42:48.655477   51251 logs.go:282] 0 containers: []
	W1018 17:42:48.655486   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:42:48.655492   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:42:48.655574   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:42:48.686750   51251 cri.go:89] found id: "f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:42:48.686773   51251 cri.go:89] found id: ""
	I1018 17:42:48.686781   51251 logs.go:282] 1 containers: [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c]
	I1018 17:42:48.686841   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:48.690841   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:42:48.690946   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:42:48.718158   51251 cri.go:89] found id: ""
	I1018 17:42:48.718186   51251 logs.go:282] 0 containers: []
	W1018 17:42:48.718194   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:42:48.718203   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:42:48.718213   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:42:48.823716   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:42:48.823756   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:42:48.901683   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:42:48.892565    1991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:48.893314    1991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:48.895024    1991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:48.895911    1991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:48.897573    1991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:42:48.892565    1991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:48.893314    1991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:48.895024    1991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:48.895911    1991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:48.897573    1991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:42:48.901743   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:42:48.901756   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:42:48.946710   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:42:48.946741   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:42:48.989214   51251 logs.go:123] Gathering logs for kube-controller-manager [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c] ...
	I1018 17:42:48.989249   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:42:49.018928   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:42:49.018952   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:42:49.063728   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:42:49.063755   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:42:49.075796   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:42:49.075823   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:42:49.107128   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:42:49.107155   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:42:49.174004   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:42:49.174037   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:42:49.202814   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:42:49.202883   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:42:51.788673   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:51.804334   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:42:51.804402   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:42:51.832430   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:42:51.832451   51251 cri.go:89] found id: ""
	I1018 17:42:51.832459   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:42:51.832517   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:51.836251   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:42:51.836320   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:42:51.862897   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:42:51.862919   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:42:51.862924   51251 cri.go:89] found id: ""
	I1018 17:42:51.862931   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:42:51.862985   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:51.866673   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:51.870113   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:42:51.870200   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:42:51.895781   51251 cri.go:89] found id: ""
	I1018 17:42:51.895805   51251 logs.go:282] 0 containers: []
	W1018 17:42:51.895813   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:42:51.895820   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:42:51.895878   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:42:51.922494   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:42:51.922516   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:42:51.922521   51251 cri.go:89] found id: ""
	I1018 17:42:51.922528   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:42:51.922581   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:51.926209   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:51.929576   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:42:51.929673   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:42:51.956090   51251 cri.go:89] found id: ""
	I1018 17:42:51.956114   51251 logs.go:282] 0 containers: []
	W1018 17:42:51.956122   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:42:51.956129   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:42:51.956187   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:42:51.988490   51251 cri.go:89] found id: "f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:42:51.988512   51251 cri.go:89] found id: ""
	I1018 17:42:51.988520   51251 logs.go:282] 1 containers: [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c]
	I1018 17:42:51.988574   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:51.992080   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:42:51.992159   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:42:52.021598   51251 cri.go:89] found id: ""
	I1018 17:42:52.021624   51251 logs.go:282] 0 containers: []
	W1018 17:42:52.021632   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:42:52.021642   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:42:52.021655   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:42:52.117617   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:42:52.117653   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:42:52.176829   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:42:52.177096   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:42:52.221507   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:42:52.221581   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:42:52.290597   51251 logs.go:123] Gathering logs for kube-controller-manager [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c] ...
	I1018 17:42:52.290630   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:42:52.318933   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:42:52.318959   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:42:52.397646   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:42:52.397679   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:42:52.429557   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:42:52.429592   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:42:52.441410   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:42:52.441440   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:42:52.515237   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:42:52.505394    2179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:52.506908    2179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:52.507495    2179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:52.509107    2179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:52.509748    2179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:42:52.505394    2179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:52.506908    2179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:52.507495    2179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:52.509107    2179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:52.509748    2179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:42:52.515259   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:42:52.515272   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:42:52.546325   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:42:52.546352   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:42:55.073960   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:55.087265   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:42:55.087396   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:42:55.118731   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:42:55.118751   51251 cri.go:89] found id: ""
	I1018 17:42:55.118760   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:42:55.118827   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:55.122773   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:42:55.122841   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:42:55.160245   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:42:55.160267   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:42:55.160284   51251 cri.go:89] found id: ""
	I1018 17:42:55.160293   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:42:55.160353   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:55.164073   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:55.167693   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:42:55.167805   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:42:55.194629   51251 cri.go:89] found id: ""
	I1018 17:42:55.194653   51251 logs.go:282] 0 containers: []
	W1018 17:42:55.194661   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:42:55.194668   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:42:55.194741   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:42:55.222517   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:42:55.222579   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:42:55.222590   51251 cri.go:89] found id: ""
	I1018 17:42:55.222599   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:42:55.222655   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:55.226357   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:55.230025   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:42:55.230092   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:42:55.263792   51251 cri.go:89] found id: ""
	I1018 17:42:55.263816   51251 logs.go:282] 0 containers: []
	W1018 17:42:55.263824   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:42:55.263830   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:42:55.263889   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:42:55.291220   51251 cri.go:89] found id: "f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:42:55.291241   51251 cri.go:89] found id: ""
	I1018 17:42:55.291249   51251 logs.go:282] 1 containers: [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c]
	I1018 17:42:55.291325   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:55.294934   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:42:55.295010   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:42:55.326586   51251 cri.go:89] found id: ""
	I1018 17:42:55.326609   51251 logs.go:282] 0 containers: []
	W1018 17:42:55.326617   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:42:55.326654   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:42:55.326671   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:42:55.401452   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:42:55.392275    2267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:55.393074    2267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:55.393930    2267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:55.395756    2267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:55.396145    2267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:42:55.392275    2267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:55.393074    2267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:55.393930    2267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:55.395756    2267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:55.396145    2267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:42:55.401476   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:42:55.401489   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:42:55.447692   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:42:55.447728   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:42:55.491129   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:42:55.491159   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:42:55.568889   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:42:55.568926   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:42:55.604397   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:42:55.604423   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:42:55.621149   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:42:55.621188   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:42:55.649355   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:42:55.649383   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:42:55.703784   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:42:55.703820   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:42:55.742564   51251 logs.go:123] Gathering logs for kube-controller-manager [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c] ...
	I1018 17:42:55.742592   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:42:55.771921   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:42:55.771952   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:42:58.379973   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:58.390987   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:42:58.391064   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:42:58.420177   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:42:58.420206   51251 cri.go:89] found id: ""
	I1018 17:42:58.420214   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:42:58.420280   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:58.423975   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:42:58.424051   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:42:58.450210   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:42:58.450232   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:42:58.450237   51251 cri.go:89] found id: ""
	I1018 17:42:58.450244   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:42:58.450302   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:58.454890   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:58.458701   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:42:58.458770   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:42:58.483310   51251 cri.go:89] found id: ""
	I1018 17:42:58.483334   51251 logs.go:282] 0 containers: []
	W1018 17:42:58.483342   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:42:58.483348   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:42:58.483405   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:42:58.511930   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:42:58.511958   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:42:58.511963   51251 cri.go:89] found id: ""
	I1018 17:42:58.511970   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:42:58.512025   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:58.515745   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:58.519340   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:42:58.519409   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:42:58.546212   51251 cri.go:89] found id: ""
	I1018 17:42:58.546233   51251 logs.go:282] 0 containers: []
	W1018 17:42:58.546250   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:42:58.546257   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:42:58.546336   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:42:58.573991   51251 cri.go:89] found id: "f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:42:58.574011   51251 cri.go:89] found id: ""
	I1018 17:42:58.574019   51251 logs.go:282] 1 containers: [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c]
	I1018 17:42:58.574073   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:58.577989   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:42:58.578068   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:42:58.609463   51251 cri.go:89] found id: ""
	I1018 17:42:58.609485   51251 logs.go:282] 0 containers: []
	W1018 17:42:58.609493   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:42:58.609520   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:42:58.609542   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:42:58.623900   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:42:58.623929   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:42:58.672129   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:42:58.672159   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:42:58.702420   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:42:58.702447   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:42:58.739914   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:42:58.739941   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:42:58.840389   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:42:58.840423   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:42:58.904498   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:42:58.896431    2440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:58.896966    2440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:58.898915    2440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:58.899719    2440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:58.901011    2440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:42:58.896431    2440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:58.896966    2440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:58.898915    2440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:58.899719    2440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:58.901011    2440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:42:58.904519   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:42:58.904534   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:42:58.933888   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:42:58.933915   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:42:58.967554   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:42:58.967628   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:42:59.028427   51251 logs.go:123] Gathering logs for kube-controller-manager [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c] ...
	I1018 17:42:59.028504   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:42:59.054221   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:42:59.054249   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:43:01.639025   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:43:01.651715   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:43:01.651793   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:43:01.685240   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:01.685309   51251 cri.go:89] found id: ""
	I1018 17:43:01.685339   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:43:01.685423   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:01.690385   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:43:01.690468   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:43:01.719962   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:01.720035   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:01.720055   51251 cri.go:89] found id: ""
	I1018 17:43:01.720076   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:43:01.720148   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:01.723990   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:01.727538   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:43:01.727607   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:43:01.756529   51251 cri.go:89] found id: ""
	I1018 17:43:01.756562   51251 logs.go:282] 0 containers: []
	W1018 17:43:01.756571   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:43:01.756595   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:43:01.756676   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:43:01.789556   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:01.789581   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:01.789586   51251 cri.go:89] found id: ""
	I1018 17:43:01.789594   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:43:01.789659   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:01.794374   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:01.798060   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:43:01.798129   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:43:01.833059   51251 cri.go:89] found id: ""
	I1018 17:43:01.833089   51251 logs.go:282] 0 containers: []
	W1018 17:43:01.833097   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:43:01.833103   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:43:01.833172   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:43:01.860988   51251 cri.go:89] found id: "f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:43:01.861009   51251 cri.go:89] found id: ""
	I1018 17:43:01.861017   51251 logs.go:282] 1 containers: [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c]
	I1018 17:43:01.861076   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:01.865838   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:43:01.865913   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:43:01.893009   51251 cri.go:89] found id: ""
	I1018 17:43:01.893035   51251 logs.go:282] 0 containers: []
	W1018 17:43:01.893043   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:43:01.893052   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:43:01.893064   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:43:01.997703   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:43:01.997739   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:02.060549   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:43:02.060581   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:02.094970   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:43:02.095001   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:02.161721   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:43:02.161757   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:02.209000   51251 logs.go:123] Gathering logs for kube-controller-manager [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c] ...
	I1018 17:43:02.209029   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:43:02.239896   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:43:02.239920   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:43:02.275701   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:43:02.275727   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:43:02.288373   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:43:02.288400   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:43:02.360448   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:43:02.351719    2599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:02.352549    2599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:02.354058    2599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:02.354626    2599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:02.356320    2599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:43:02.351719    2599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:02.352549    2599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:02.354058    2599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:02.354626    2599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:02.356320    2599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:43:02.360469   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:43:02.360481   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:02.390739   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:43:02.390769   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:43:04.978257   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:43:04.988916   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:43:04.989037   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:43:05.019550   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:05.019573   51251 cri.go:89] found id: ""
	I1018 17:43:05.019582   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:43:05.019646   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:05.023992   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:43:05.024069   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:43:05.050514   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:05.050533   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:05.050538   51251 cri.go:89] found id: ""
	I1018 17:43:05.050546   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:43:05.050601   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:05.054386   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:05.058083   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:43:05.058155   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:43:05.093052   51251 cri.go:89] found id: ""
	I1018 17:43:05.093079   51251 logs.go:282] 0 containers: []
	W1018 17:43:05.093088   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:43:05.093096   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:43:05.093200   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:43:05.124045   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:05.124115   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:05.124134   51251 cri.go:89] found id: ""
	I1018 17:43:05.124156   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:43:05.124238   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:05.129085   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:05.134571   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:43:05.134649   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:43:05.162401   51251 cri.go:89] found id: ""
	I1018 17:43:05.162423   51251 logs.go:282] 0 containers: []
	W1018 17:43:05.162432   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:43:05.162439   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:43:05.162505   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:43:05.191429   51251 cri.go:89] found id: "f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:43:05.191451   51251 cri.go:89] found id: ""
	I1018 17:43:05.191459   51251 logs.go:282] 1 containers: [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c]
	I1018 17:43:05.191513   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:05.195222   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:43:05.195291   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:43:05.233765   51251 cri.go:89] found id: ""
	I1018 17:43:05.233789   51251 logs.go:282] 0 containers: []
	W1018 17:43:05.233797   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:43:05.233813   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:43:05.233824   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:43:05.314015   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:43:05.314049   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:43:05.343775   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:43:05.343799   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:43:05.447678   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:43:05.447715   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:43:05.461224   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:43:05.461251   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:43:05.531644   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:43:05.521503    2698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:05.523802    2698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:05.525607    2698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:05.526297    2698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:05.527849    2698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:43:05.521503    2698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:05.523802    2698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:05.525607    2698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:05.526297    2698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:05.527849    2698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:43:05.531668   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:43:05.531681   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:05.589572   51251 logs.go:123] Gathering logs for kube-controller-manager [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c] ...
	I1018 17:43:05.589609   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:43:05.620844   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:43:05.620871   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:05.649833   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:43:05.649861   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:05.702301   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:43:05.702335   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:05.746579   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:43:05.746612   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:08.279428   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:43:08.290505   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:43:08.290572   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:43:08.323196   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:08.323217   51251 cri.go:89] found id: ""
	I1018 17:43:08.323225   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:43:08.323287   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:08.326970   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:43:08.327042   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:43:08.353811   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:08.353833   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:08.353837   51251 cri.go:89] found id: ""
	I1018 17:43:08.353845   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:43:08.353903   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:08.357796   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:08.361798   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:43:08.361874   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:43:08.390063   51251 cri.go:89] found id: ""
	I1018 17:43:08.390086   51251 logs.go:282] 0 containers: []
	W1018 17:43:08.390094   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:43:08.390104   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:43:08.390164   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:43:08.417117   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:08.417137   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:08.417142   51251 cri.go:89] found id: ""
	I1018 17:43:08.417153   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:43:08.417209   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:08.421291   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:08.424803   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:43:08.424875   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:43:08.450383   51251 cri.go:89] found id: ""
	I1018 17:43:08.450405   51251 logs.go:282] 0 containers: []
	W1018 17:43:08.450412   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:43:08.450419   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:43:08.450517   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:43:08.475291   51251 cri.go:89] found id: "f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:43:08.475312   51251 cri.go:89] found id: ""
	I1018 17:43:08.475321   51251 logs.go:282] 1 containers: [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c]
	I1018 17:43:08.475376   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:08.479043   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:43:08.479113   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:43:08.509786   51251 cri.go:89] found id: ""
	I1018 17:43:08.509809   51251 logs.go:282] 0 containers: []
	W1018 17:43:08.509817   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:43:08.509826   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:43:08.509838   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:43:08.605996   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:43:08.606031   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:43:08.622166   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:43:08.622201   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:43:08.702891   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:43:08.692116    2820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:08.693186    2820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:08.694251    2820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:08.694895    2820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:08.697165    2820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:43:08.692116    2820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:08.693186    2820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:08.694251    2820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:08.694895    2820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:08.697165    2820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:43:08.702955   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:43:08.702973   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:08.732447   51251 logs.go:123] Gathering logs for kube-controller-manager [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c] ...
	I1018 17:43:08.732474   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:43:08.759641   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:43:08.759667   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:43:08.790348   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:43:08.790378   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:08.821468   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:43:08.821493   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:08.873070   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:43:08.873109   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:08.906030   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:43:08.906070   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:08.964907   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:43:08.964966   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:43:11.547663   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:43:11.559867   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:43:11.559932   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:43:11.595124   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:11.595143   51251 cri.go:89] found id: ""
	I1018 17:43:11.595151   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:43:11.595209   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:11.599553   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:43:11.599619   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:43:11.639738   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:11.639820   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:11.639844   51251 cri.go:89] found id: ""
	I1018 17:43:11.639865   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:43:11.639950   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:11.646442   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:11.651648   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:43:11.651787   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:43:11.695203   51251 cri.go:89] found id: ""
	I1018 17:43:11.695286   51251 logs.go:282] 0 containers: []
	W1018 17:43:11.695316   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:43:11.695337   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:43:11.695418   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:43:11.744347   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:11.744416   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:11.744441   51251 cri.go:89] found id: ""
	I1018 17:43:11.744463   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:43:11.744558   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:11.751191   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:11.755958   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:43:11.756105   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:43:11.791266   51251 cri.go:89] found id: ""
	I1018 17:43:11.791331   51251 logs.go:282] 0 containers: []
	W1018 17:43:11.791353   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:43:11.791383   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:43:11.791474   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:43:11.834876   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:11.834963   51251 cri.go:89] found id: "f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:43:11.834989   51251 cri.go:89] found id: ""
	I1018 17:43:11.835011   51251 logs.go:282] 2 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c]
	I1018 17:43:11.835086   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:11.841198   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:11.846580   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:43:11.846715   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:43:11.897749   51251 cri.go:89] found id: ""
	I1018 17:43:11.897822   51251 logs.go:282] 0 containers: []
	W1018 17:43:11.897846   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:43:11.897881   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:43:11.897928   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:11.943452   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:43:11.943536   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:12.005227   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:43:12.005338   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:43:12.062557   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:43:12.062624   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:43:12.182021   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:43:12.182095   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:43:12.197845   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:43:12.197920   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:12.260741   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:43:12.260817   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:12.335387   51251 logs.go:123] Gathering logs for kube-controller-manager [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c] ...
	I1018 17:43:12.335466   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:43:12.369750   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:43:12.369775   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:43:12.449888   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:43:12.449923   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:43:12.545478   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:43:12.535379    3028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:12.536014    3028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:12.539746    3028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:12.540245    3028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:12.541774    3028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:43:12.535379    3028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:12.536014    3028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:12.539746    3028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:12.540245    3028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:12.541774    3028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:43:12.545496   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:43:12.545509   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:12.577372   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:43:12.577397   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:15.116790   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:43:15.132080   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:43:15.132161   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:43:15.159487   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:15.159506   51251 cri.go:89] found id: ""
	I1018 17:43:15.159515   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:43:15.159567   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:15.163178   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:43:15.163272   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:43:15.191277   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:15.191296   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:15.191300   51251 cri.go:89] found id: ""
	I1018 17:43:15.191315   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:43:15.191372   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:15.195019   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:15.198423   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:43:15.198491   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:43:15.225886   51251 cri.go:89] found id: ""
	I1018 17:43:15.225910   51251 logs.go:282] 0 containers: []
	W1018 17:43:15.225919   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:43:15.225925   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:43:15.225986   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:43:15.251392   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:15.251414   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:15.251419   51251 cri.go:89] found id: ""
	I1018 17:43:15.251426   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:43:15.251480   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:15.255201   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:15.258787   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:43:15.258880   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:43:15.285767   51251 cri.go:89] found id: ""
	I1018 17:43:15.285831   51251 logs.go:282] 0 containers: []
	W1018 17:43:15.285854   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:43:15.285878   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:43:15.285951   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:43:15.316160   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:15.316219   51251 cri.go:89] found id: "f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:43:15.316239   51251 cri.go:89] found id: ""
	I1018 17:43:15.316261   51251 logs.go:282] 2 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c]
	I1018 17:43:15.316333   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:15.320128   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:15.323596   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:43:15.323665   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:43:15.349496   51251 cri.go:89] found id: ""
	I1018 17:43:15.349522   51251 logs.go:282] 0 containers: []
	W1018 17:43:15.349531   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:43:15.349541   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:43:15.349569   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:15.420881   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:43:15.420916   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:43:15.451259   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:43:15.451285   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:43:15.548698   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:43:15.548740   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:43:15.561517   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:43:15.561546   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:15.608036   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:43:15.608071   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:15.641405   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:43:15.641431   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:15.668198   51251 logs.go:123] Gathering logs for kube-controller-manager [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c] ...
	I1018 17:43:15.668226   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:43:15.694563   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:43:15.694591   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:43:15.770902   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:43:15.770936   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:43:15.836895   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:43:15.828987    3175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:15.829667    3175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:15.831325    3175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:15.831865    3175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:15.833343    3175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:43:15.828987    3175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:15.829667    3175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:15.831325    3175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:15.831865    3175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:15.833343    3175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:43:15.836919   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:43:15.836931   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:15.865888   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:43:15.865916   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:18.408468   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:43:18.419326   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:43:18.419393   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:43:18.443753   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:18.443775   51251 cri.go:89] found id: ""
	I1018 17:43:18.443783   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:43:18.443839   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:18.447404   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:43:18.447481   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:43:18.473566   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:18.473627   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:18.473639   51251 cri.go:89] found id: ""
	I1018 17:43:18.473647   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:43:18.473702   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:18.477524   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:18.481293   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:43:18.481397   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:43:18.507887   51251 cri.go:89] found id: ""
	I1018 17:43:18.507965   51251 logs.go:282] 0 containers: []
	W1018 17:43:18.507991   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:43:18.508011   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:43:18.508082   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:43:18.534789   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:18.534809   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:18.534814   51251 cri.go:89] found id: ""
	I1018 17:43:18.534821   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:43:18.534876   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:18.538531   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:18.542059   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:43:18.542133   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:43:18.567277   51251 cri.go:89] found id: ""
	I1018 17:43:18.567299   51251 logs.go:282] 0 containers: []
	W1018 17:43:18.567307   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:43:18.567316   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:43:18.567375   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:43:18.593882   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:18.593902   51251 cri.go:89] found id: "f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:43:18.593907   51251 cri.go:89] found id: ""
	I1018 17:43:18.593914   51251 logs.go:282] 2 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c]
	I1018 17:43:18.593971   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:18.598057   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:18.601482   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:43:18.601548   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:43:18.626724   51251 cri.go:89] found id: ""
	I1018 17:43:18.626748   51251 logs.go:282] 0 containers: []
	W1018 17:43:18.626756   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:43:18.626766   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:43:18.626777   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:43:18.720186   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:43:18.720220   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:43:18.732342   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:43:18.732372   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:18.777781   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:43:18.777813   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:18.814519   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:43:18.814548   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:18.842102   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:43:18.842129   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:18.870191   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:43:18.870215   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:43:18.940137   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:43:18.931877    3300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:18.932545    3300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:18.934242    3300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:18.934870    3300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:18.936368    3300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:43:18.931877    3300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:18.932545    3300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:18.934242    3300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:18.934870    3300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:18.936368    3300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:43:18.940159   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:43:18.940171   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:18.972118   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:43:18.972143   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:19.028698   51251 logs.go:123] Gathering logs for kube-controller-manager [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c] ...
	I1018 17:43:19.028731   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:43:19.053561   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:43:19.053588   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:43:19.134177   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:43:19.134210   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:43:21.666074   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:43:21.677905   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:43:21.677982   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:43:21.710449   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:21.710470   51251 cri.go:89] found id: ""
	I1018 17:43:21.710479   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:43:21.710534   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:21.714253   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:43:21.714326   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:43:21.741478   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:21.741547   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:21.741558   51251 cri.go:89] found id: ""
	I1018 17:43:21.741566   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:43:21.741627   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:21.745535   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:21.750022   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:43:21.750140   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:43:21.780635   51251 cri.go:89] found id: ""
	I1018 17:43:21.780708   51251 logs.go:282] 0 containers: []
	W1018 17:43:21.780731   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:43:21.780778   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:43:21.780856   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:43:21.808496   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:21.808514   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:21.808518   51251 cri.go:89] found id: ""
	I1018 17:43:21.808525   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:43:21.808582   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:21.812401   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:21.815810   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:43:21.815876   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:43:21.845624   51251 cri.go:89] found id: ""
	I1018 17:43:21.845657   51251 logs.go:282] 0 containers: []
	W1018 17:43:21.845665   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:43:21.845672   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:43:21.845731   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:43:21.871314   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:21.871332   51251 cri.go:89] found id: "f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:43:21.871336   51251 cri.go:89] found id: ""
	I1018 17:43:21.871343   51251 logs.go:282] 2 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c]
	I1018 17:43:21.871399   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:21.875259   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:21.878771   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:43:21.878839   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:43:21.913289   51251 cri.go:89] found id: ""
	I1018 17:43:21.913312   51251 logs.go:282] 0 containers: []
	W1018 17:43:21.913321   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:43:21.913330   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:43:21.913341   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:21.990540   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:43:21.990577   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:22.023215   51251 logs.go:123] Gathering logs for kube-controller-manager [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c] ...
	I1018 17:43:22.023243   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:43:22.053561   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:43:22.053588   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:22.081164   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:43:22.081191   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:22.145177   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:43:22.145212   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:22.184829   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:43:22.184859   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:22.228057   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:43:22.228081   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:43:22.316019   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:43:22.316053   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:43:22.347876   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:43:22.347901   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:43:22.450507   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:43:22.450541   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:43:22.462429   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:43:22.462456   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:43:22.536495   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:43:22.527657    3486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:22.528744    3486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:22.530446    3486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:22.531068    3486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:22.532737    3486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:43:22.527657    3486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:22.528744    3486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:22.530446    3486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:22.531068    3486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:22.532737    3486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:43:25.036723   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:43:25.048068   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:43:25.048137   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:43:25.074496   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:25.074517   51251 cri.go:89] found id: ""
	I1018 17:43:25.074525   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:43:25.074581   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:25.078699   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:43:25.078775   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:43:25.106068   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:25.106088   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:25.106092   51251 cri.go:89] found id: ""
	I1018 17:43:25.106099   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:43:25.106154   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:25.109911   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:25.116299   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:43:25.116392   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:43:25.152465   51251 cri.go:89] found id: ""
	I1018 17:43:25.152545   51251 logs.go:282] 0 containers: []
	W1018 17:43:25.152568   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:43:25.152587   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:43:25.152679   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:43:25.179667   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:25.179690   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:25.179695   51251 cri.go:89] found id: ""
	I1018 17:43:25.179703   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:43:25.179762   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:25.183571   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:25.187316   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:43:25.187431   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:43:25.216762   51251 cri.go:89] found id: ""
	I1018 17:43:25.216796   51251 logs.go:282] 0 containers: []
	W1018 17:43:25.216805   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:43:25.216812   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:43:25.216871   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:43:25.244556   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:25.244578   51251 cri.go:89] found id: ""
	I1018 17:43:25.244587   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:43:25.244642   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:25.248407   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:43:25.248485   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:43:25.274854   51251 cri.go:89] found id: ""
	I1018 17:43:25.274879   51251 logs.go:282] 0 containers: []
	W1018 17:43:25.274888   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:43:25.274897   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:43:25.274908   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:25.331118   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:43:25.331153   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:43:25.411446   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:43:25.411478   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:43:25.462440   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:43:25.462467   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:25.525297   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:43:25.525373   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:25.555066   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:43:25.555092   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:25.581528   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:43:25.581558   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:43:25.682424   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:43:25.682461   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:43:25.695456   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:43:25.695486   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:43:25.766142   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:43:25.757215    3610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:25.757999    3610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:25.759442    3610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:25.759856    3610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:25.761265    3610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:43:25.757215    3610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:25.757999    3610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:25.759442    3610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:25.759856    3610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:25.761265    3610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:43:25.766162   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:43:25.766174   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:25.795404   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:43:25.795433   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:28.337726   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:43:28.348255   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:43:28.348338   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:43:28.382821   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:28.382841   51251 cri.go:89] found id: ""
	I1018 17:43:28.382849   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:43:28.382903   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:28.386571   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:43:28.386653   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:43:28.418956   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:28.418976   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:28.418981   51251 cri.go:89] found id: ""
	I1018 17:43:28.418988   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:43:28.419041   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:28.422637   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:28.426047   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:43:28.426115   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:43:28.450805   51251 cri.go:89] found id: ""
	I1018 17:43:28.450826   51251 logs.go:282] 0 containers: []
	W1018 17:43:28.450834   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:43:28.450841   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:43:28.450897   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:43:28.476049   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:28.476069   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:28.476075   51251 cri.go:89] found id: ""
	I1018 17:43:28.476083   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:43:28.476137   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:28.479674   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:28.483214   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:43:28.483280   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:43:28.509438   51251 cri.go:89] found id: ""
	I1018 17:43:28.509460   51251 logs.go:282] 0 containers: []
	W1018 17:43:28.509468   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:43:28.509475   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:43:28.509531   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:43:28.536762   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:28.536783   51251 cri.go:89] found id: ""
	I1018 17:43:28.536791   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:43:28.536846   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:28.540786   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:43:28.540849   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:43:28.566044   51251 cri.go:89] found id: ""
	I1018 17:43:28.566066   51251 logs.go:282] 0 containers: []
	W1018 17:43:28.566076   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:43:28.566085   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:43:28.566126   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:43:28.668507   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:43:28.668548   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:28.696140   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:43:28.696166   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:28.742992   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:43:28.743028   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:28.773720   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:43:28.773749   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:28.800871   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:43:28.800897   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:43:28.812516   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:43:28.812544   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:43:28.881394   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:43:28.872850    3737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:28.873551    3737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:28.875119    3737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:28.875694    3737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:28.877437    3737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:43:28.872850    3737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:28.873551    3737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:28.875119    3737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:28.875694    3737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:28.877437    3737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:43:28.881466   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:43:28.881493   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:28.920319   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:43:28.920351   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:29.001463   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:43:29.001501   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:43:29.080673   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:43:29.080705   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:43:31.615872   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:43:31.627104   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:43:31.627173   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:43:31.652790   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:31.652812   51251 cri.go:89] found id: ""
	I1018 17:43:31.652820   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:43:31.652880   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:31.656835   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:43:31.656905   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:43:31.684663   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:31.684685   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:31.684690   51251 cri.go:89] found id: ""
	I1018 17:43:31.684698   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:43:31.684752   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:31.688556   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:31.692271   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:43:31.692343   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:43:31.720037   51251 cri.go:89] found id: ""
	I1018 17:43:31.720059   51251 logs.go:282] 0 containers: []
	W1018 17:43:31.720067   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:43:31.720074   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:43:31.720130   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:43:31.745058   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:31.745078   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:31.745083   51251 cri.go:89] found id: ""
	I1018 17:43:31.745090   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:43:31.745144   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:31.748688   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:31.752002   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:43:31.752068   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:43:31.780253   51251 cri.go:89] found id: ""
	I1018 17:43:31.780275   51251 logs.go:282] 0 containers: []
	W1018 17:43:31.780283   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:43:31.780289   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:43:31.780346   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:43:31.806333   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:31.806358   51251 cri.go:89] found id: ""
	I1018 17:43:31.806365   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:43:31.806429   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:31.810331   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:43:31.810403   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:43:31.836140   51251 cri.go:89] found id: ""
	I1018 17:43:31.836205   51251 logs.go:282] 0 containers: []
	W1018 17:43:31.836227   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:43:31.836250   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:43:31.836292   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:31.874437   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:43:31.874512   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:31.901146   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:43:31.901171   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:43:31.998418   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:43:31.998452   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:43:32.014569   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:43:32.014606   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:32.063231   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:43:32.063266   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:32.130021   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:43:32.130061   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:32.160724   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:43:32.160761   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:43:32.239135   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:43:32.239173   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:43:32.285504   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:43:32.285531   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:43:32.361004   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:43:32.352916    3895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:32.353683    3895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:32.355270    3895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:32.355600    3895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:32.357143    3895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:43:32.352916    3895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:32.353683    3895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:32.355270    3895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:32.355600    3895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:32.357143    3895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:43:32.361029   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:43:32.361042   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:34.888854   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:43:34.901112   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:43:34.901187   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:43:34.929962   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:34.929982   51251 cri.go:89] found id: ""
	I1018 17:43:34.929990   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:43:34.930044   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:34.933771   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:43:34.933840   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:43:34.974958   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:34.974990   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:34.974994   51251 cri.go:89] found id: ""
	I1018 17:43:34.975002   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:43:34.975063   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:34.979007   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:34.982588   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:43:34.982669   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:43:35.025772   51251 cri.go:89] found id: ""
	I1018 17:43:35.025794   51251 logs.go:282] 0 containers: []
	W1018 17:43:35.025802   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:43:35.025808   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:43:35.025867   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:43:35.054583   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:35.054606   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:35.054611   51251 cri.go:89] found id: ""
	I1018 17:43:35.054619   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:43:35.054683   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:35.058624   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:35.062166   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:43:35.062249   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:43:35.099459   51251 cri.go:89] found id: ""
	I1018 17:43:35.099482   51251 logs.go:282] 0 containers: []
	W1018 17:43:35.099490   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:43:35.099497   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:43:35.099553   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:43:35.135905   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:35.135927   51251 cri.go:89] found id: ""
	I1018 17:43:35.135936   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:43:35.135993   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:35.139558   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:43:35.139675   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:43:35.167854   51251 cri.go:89] found id: ""
	I1018 17:43:35.167877   51251 logs.go:282] 0 containers: []
	W1018 17:43:35.167886   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:43:35.167895   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:43:35.167906   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:43:35.268911   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:43:35.268953   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:43:35.351239   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:43:35.342070    3975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:35.342707    3975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:35.344447    3975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:35.345185    3975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:35.346039    3975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:43:35.342070    3975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:35.342707    3975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:35.344447    3975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:35.345185    3975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:35.346039    3975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:43:35.351259   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:43:35.351271   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:35.414894   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:43:35.414928   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:35.449804   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:43:35.449834   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:35.506409   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:43:35.506445   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:43:35.595870   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:43:35.595911   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:43:35.608335   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:43:35.608364   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:35.639546   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:43:35.639574   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:35.667961   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:43:35.667987   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:35.698739   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:43:35.698763   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:43:38.237278   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:43:38.248092   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:43:38.248161   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:43:38.274867   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:38.274888   51251 cri.go:89] found id: ""
	I1018 17:43:38.274896   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:43:38.274965   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:38.278707   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:43:38.278774   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:43:38.304232   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:38.304252   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:38.304256   51251 cri.go:89] found id: ""
	I1018 17:43:38.304264   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:43:38.304317   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:38.309670   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:38.313425   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:43:38.313497   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:43:38.344118   51251 cri.go:89] found id: ""
	I1018 17:43:38.344140   51251 logs.go:282] 0 containers: []
	W1018 17:43:38.344149   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:43:38.344156   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:43:38.344214   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:43:38.376271   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:38.376294   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:38.376298   51251 cri.go:89] found id: ""
	I1018 17:43:38.376316   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:43:38.376373   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:38.380454   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:38.384255   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:43:38.384326   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:43:38.409931   51251 cri.go:89] found id: ""
	I1018 17:43:38.409955   51251 logs.go:282] 0 containers: []
	W1018 17:43:38.409963   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:43:38.409977   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:43:38.410038   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:43:38.436568   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:38.436591   51251 cri.go:89] found id: ""
	I1018 17:43:38.436600   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:43:38.436672   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:38.440383   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:43:38.440477   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:43:38.468084   51251 cri.go:89] found id: ""
	I1018 17:43:38.468161   51251 logs.go:282] 0 containers: []
	W1018 17:43:38.468184   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:43:38.468206   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:43:38.468228   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:43:38.565168   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:43:38.565204   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:43:38.577269   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:43:38.577297   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:43:38.646729   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:43:38.638445    4115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:38.639186    4115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:38.640793    4115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:38.641395    4115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:38.643175    4115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:43:38.638445    4115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:38.639186    4115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:38.640793    4115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:38.641395    4115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:38.643175    4115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:43:38.646754   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:43:38.646768   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:38.673481   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:43:38.673507   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:38.719835   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:43:38.719871   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:38.752322   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:43:38.752362   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:38.783579   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:43:38.783606   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:43:38.820293   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:43:38.820322   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:38.878730   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:43:38.878761   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:38.907670   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:43:38.907740   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
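
The retry cycles above and below all follow the same discovery-then-tail pattern: each control-plane component is looked up by name with crictl, and any container IDs found are tailed for their last 400 lines, alongside the kubelet, CRI-O and dmesg node logs. A minimal shell sketch of that pattern follows; it reuses only commands that appear verbatim in the log, while the loop wrapper and the COMPONENTS list are illustrative and are not minikube's own code.

	# Sketch only: mirrors the crictl/journalctl commands recorded in the log above.
	# The component list and the loop are illustrative; minikube drives this from Go.
	COMPONENTS="kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet"
	for name in $COMPONENTS; do
	  for id in $(sudo crictl ps -a --quiet --name="$name"); do
	    sudo crictl logs --tail 400 "$id"
	  done
	done
	# Node-level sources gathered alongside the containers:
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
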
	I1018 17:43:41.489854   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:43:41.500771   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:43:41.500872   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:43:41.526674   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:41.526696   51251 cri.go:89] found id: ""
	I1018 17:43:41.526706   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:43:41.526770   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:41.531078   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:43:41.531191   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:43:41.562796   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:41.562823   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:41.562829   51251 cri.go:89] found id: ""
	I1018 17:43:41.562837   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:43:41.562959   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:41.566913   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:41.570998   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:43:41.571118   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:43:41.597622   51251 cri.go:89] found id: ""
	I1018 17:43:41.597647   51251 logs.go:282] 0 containers: []
	W1018 17:43:41.597655   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:43:41.597662   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:43:41.597720   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:43:41.627549   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:41.627570   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:41.627575   51251 cri.go:89] found id: ""
	I1018 17:43:41.627583   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:43:41.627642   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:41.631299   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:41.635563   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:43:41.635662   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:43:41.662146   51251 cri.go:89] found id: ""
	I1018 17:43:41.662170   51251 logs.go:282] 0 containers: []
	W1018 17:43:41.662179   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:43:41.662185   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:43:41.662244   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:43:41.693012   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:41.693038   51251 cri.go:89] found id: ""
	I1018 17:43:41.693047   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:43:41.693132   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:41.697195   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:43:41.697265   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:43:41.729826   51251 cri.go:89] found id: ""
	I1018 17:43:41.729850   51251 logs.go:282] 0 containers: []
	W1018 17:43:41.729859   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:43:41.729869   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:43:41.729880   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:43:41.828078   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:43:41.828110   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:43:41.901435   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:43:41.892987    4248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:41.893726    4248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:41.895255    4248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:41.895832    4248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:41.897510    4248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:43:41.892987    4248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:41.893726    4248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:41.895255    4248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:41.895832    4248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:41.897510    4248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:43:41.901459   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:43:41.901472   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:41.929914   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:43:41.929989   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:41.987757   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:43:41.987802   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:42.039791   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:43:42.039830   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:42.075456   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:43:42.075487   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:43:42.149099   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:43:42.149132   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:43:42.164617   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:43:42.164650   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:42.257289   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:43:42.257327   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:42.287081   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:43:42.287112   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:43:44.874333   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:43:44.884870   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:43:44.884968   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:43:44.912153   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:44.912175   51251 cri.go:89] found id: ""
	I1018 17:43:44.912183   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:43:44.912237   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:44.915849   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:43:44.915919   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:43:44.942584   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:44.942604   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:44.942609   51251 cri.go:89] found id: ""
	I1018 17:43:44.942616   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:43:44.942668   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:44.946463   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:44.949841   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:43:44.949907   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:43:44.986621   51251 cri.go:89] found id: ""
	I1018 17:43:44.986646   51251 logs.go:282] 0 containers: []
	W1018 17:43:44.986654   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:43:44.986661   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:43:44.986718   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:43:45.029811   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:45.029830   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:45.029835   51251 cri.go:89] found id: ""
	I1018 17:43:45.029843   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:43:45.029908   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:45.035692   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:45.040000   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:43:45.040078   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:43:45.098723   51251 cri.go:89] found id: ""
	I1018 17:43:45.098751   51251 logs.go:282] 0 containers: []
	W1018 17:43:45.098760   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:43:45.098770   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:43:45.098843   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:43:45.162198   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:45.162228   51251 cri.go:89] found id: ""
	I1018 17:43:45.162238   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:43:45.162307   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:45.167619   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:43:45.167700   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:43:45.211984   51251 cri.go:89] found id: ""
	I1018 17:43:45.212008   51251 logs.go:282] 0 containers: []
	W1018 17:43:45.212018   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:43:45.212028   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:43:45.212041   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:43:45.226821   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:43:45.226851   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:43:45.337585   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:43:45.321955    4382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:45.322823    4382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:45.324086    4382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:45.327115    4382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:45.329027    4382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:43:45.321955    4382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:45.322823    4382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:45.324086    4382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:45.327115    4382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:45.329027    4382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:43:45.337625   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:43:45.337641   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:45.377460   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:43:45.377491   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:45.429187   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:43:45.429222   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:45.457994   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:43:45.458022   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:43:45.540761   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:43:45.540797   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:43:45.573633   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:43:45.573662   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:43:45.672580   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:43:45.672617   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:45.706688   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:43:45.706720   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:45.783083   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:43:45.783120   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:48.314260   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:43:48.324891   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:43:48.324985   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:43:48.357904   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:48.357927   51251 cri.go:89] found id: ""
	I1018 17:43:48.357940   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:43:48.357997   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:48.362392   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:43:48.362474   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:43:48.397905   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:48.397927   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:48.397932   51251 cri.go:89] found id: ""
	I1018 17:43:48.397940   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:43:48.397993   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:48.401719   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:48.404922   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:43:48.405019   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:43:48.431573   51251 cri.go:89] found id: ""
	I1018 17:43:48.431598   51251 logs.go:282] 0 containers: []
	W1018 17:43:48.431606   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:43:48.431613   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:43:48.431673   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:43:48.458728   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:48.458755   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:48.458760   51251 cri.go:89] found id: ""
	I1018 17:43:48.458767   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:43:48.458824   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:48.462488   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:48.465841   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:43:48.465909   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:43:48.491719   51251 cri.go:89] found id: ""
	I1018 17:43:48.491741   51251 logs.go:282] 0 containers: []
	W1018 17:43:48.491749   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:43:48.491755   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:43:48.491815   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:43:48.522124   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:48.522189   51251 cri.go:89] found id: ""
	I1018 17:43:48.522211   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:43:48.522292   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:48.526320   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:43:48.526407   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:43:48.552413   51251 cri.go:89] found id: ""
	I1018 17:43:48.552436   51251 logs.go:282] 0 containers: []
	W1018 17:43:48.552445   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:43:48.552454   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:43:48.552471   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:43:48.647083   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:43:48.647114   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:43:48.660735   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:43:48.660768   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:48.690812   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:43:48.690837   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:48.721178   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:43:48.721208   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:48.748549   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:43:48.748617   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:43:48.823598   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:43:48.823637   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:43:48.855654   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:43:48.855680   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:43:48.931642   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:43:48.922606    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:48.923296    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:48.925195    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:48.925885    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:48.928154    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:43:48.922606    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:48.923296    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:48.925195    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:48.925885    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:48.928154    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:43:48.931664   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:43:48.931678   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:48.984964   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:43:48.985003   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:49.022359   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:43:49.022391   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:51.581690   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:43:51.592535   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:43:51.592618   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:43:51.621442   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:51.621470   51251 cri.go:89] found id: ""
	I1018 17:43:51.621479   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:43:51.621535   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:51.625435   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:43:51.625513   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:43:51.653328   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:51.653354   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:51.653360   51251 cri.go:89] found id: ""
	I1018 17:43:51.653367   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:43:51.653425   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:51.657372   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:51.660911   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:43:51.661083   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:43:51.687435   51251 cri.go:89] found id: ""
	I1018 17:43:51.687456   51251 logs.go:282] 0 containers: []
	W1018 17:43:51.687465   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:43:51.687472   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:43:51.687533   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:43:51.716167   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:51.716189   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:51.716194   51251 cri.go:89] found id: ""
	I1018 17:43:51.716201   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:43:51.716256   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:51.719950   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:51.723494   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:43:51.723575   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:43:51.752147   51251 cri.go:89] found id: ""
	I1018 17:43:51.752171   51251 logs.go:282] 0 containers: []
	W1018 17:43:51.752180   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:43:51.752186   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:43:51.752245   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:43:51.779213   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:51.779236   51251 cri.go:89] found id: ""
	I1018 17:43:51.779244   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:43:51.779305   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:51.782913   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:43:51.782986   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:43:51.810202   51251 cri.go:89] found id: ""
	I1018 17:43:51.810228   51251 logs.go:282] 0 containers: []
	W1018 17:43:51.810236   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:43:51.810246   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:43:51.810258   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:43:51.824029   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:43:51.824058   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:43:51.894919   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:43:51.886698    4657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:51.887712    4657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:51.889389    4657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:51.889843    4657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:51.891356    4657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:43:51.886698    4657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:51.887712    4657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:51.889389    4657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:51.889843    4657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:51.891356    4657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:43:51.894983   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:43:51.895002   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:51.955232   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:43:51.955263   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:51.990622   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:43:51.990651   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:52.020376   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:43:52.020405   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:43:52.066713   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:43:52.066740   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:43:52.172061   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:43:52.172103   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:52.214913   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:43:52.214938   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:52.251763   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:43:52.251854   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:52.311510   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:43:52.311541   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:43:54.894390   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:43:54.907290   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:43:54.907366   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:43:54.940172   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:54.940196   51251 cri.go:89] found id: ""
	I1018 17:43:54.940204   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:43:54.940260   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:54.943992   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:43:54.944086   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:43:54.978188   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:54.978210   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:54.978214   51251 cri.go:89] found id: ""
	I1018 17:43:54.978222   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:43:54.978282   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:54.982194   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:54.986022   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:43:54.986121   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:43:55.029209   51251 cri.go:89] found id: ""
	I1018 17:43:55.029239   51251 logs.go:282] 0 containers: []
	W1018 17:43:55.029248   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:43:55.029256   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:43:55.029318   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:43:55.057246   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:55.057271   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:55.057276   51251 cri.go:89] found id: ""
	I1018 17:43:55.057283   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:43:55.057336   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:55.061051   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:55.064367   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:43:55.064436   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:43:55.095243   51251 cri.go:89] found id: ""
	I1018 17:43:55.095307   51251 logs.go:282] 0 containers: []
	W1018 17:43:55.095329   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:43:55.095341   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:43:55.095399   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:43:55.122785   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:55.122804   51251 cri.go:89] found id: ""
	I1018 17:43:55.122813   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:43:55.122876   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:55.132639   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:43:55.132738   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:43:55.162942   51251 cri.go:89] found id: ""
	I1018 17:43:55.162977   51251 logs.go:282] 0 containers: []
	W1018 17:43:55.162986   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:43:55.163011   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:43:55.163032   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:55.228280   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:43:55.228312   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:55.259473   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:43:55.259500   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:43:55.292185   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:43:55.292220   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:55.341717   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:43:55.341749   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:55.375698   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:43:55.375727   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:55.402916   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:43:55.402942   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:43:55.490846   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:43:55.490886   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:43:55.587437   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:43:55.587478   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:43:55.600254   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:43:55.600280   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:43:55.666266   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:43:55.657772    4845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:55.658733    4845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:55.660294    4845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:55.660924    4845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:55.662498    4845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:43:55.657772    4845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:55.658733    4845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:55.660294    4845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:55.660924    4845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:55.662498    4845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:43:55.666289   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:43:55.666311   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:58.191608   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:43:58.207197   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:43:58.207266   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:43:58.241572   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:58.241593   51251 cri.go:89] found id: ""
	I1018 17:43:58.241602   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:43:58.241656   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:58.245301   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:43:58.245380   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:43:58.275809   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:58.275830   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:58.275835   51251 cri.go:89] found id: ""
	I1018 17:43:58.275842   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:43:58.275898   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:58.279806   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:58.283389   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:43:58.283459   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:43:58.312440   51251 cri.go:89] found id: ""
	I1018 17:43:58.312464   51251 logs.go:282] 0 containers: []
	W1018 17:43:58.312472   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:43:58.312479   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:43:58.312535   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:43:58.341315   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:58.341341   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:58.341346   51251 cri.go:89] found id: ""
	I1018 17:43:58.341354   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:43:58.341418   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:58.345155   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:58.348837   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:43:58.348906   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:43:58.375741   51251 cri.go:89] found id: ""
	I1018 17:43:58.375811   51251 logs.go:282] 0 containers: []
	W1018 17:43:58.375843   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:43:58.375861   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:43:58.375951   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:43:58.402340   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:58.402361   51251 cri.go:89] found id: ""
	I1018 17:43:58.402369   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:43:58.402424   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:58.406046   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:43:58.406112   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:43:58.430628   51251 cri.go:89] found id: ""
	I1018 17:43:58.430701   51251 logs.go:282] 0 containers: []
	W1018 17:43:58.430717   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:43:58.430727   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:43:58.430737   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:43:58.524428   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:43:58.524462   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:58.581885   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:43:58.581916   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:58.611949   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:43:58.611979   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:43:58.693414   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:43:58.693450   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:43:58.705470   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:43:58.705496   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:43:58.771817   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:43:58.763821    4948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:58.764175    4948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:58.765665    4948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:58.766083    4948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:58.767558    4948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:43:58.763821    4948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:58.764175    4948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:58.765665    4948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:58.766083    4948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:58.767558    4948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:43:58.771836   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:43:58.771847   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:58.798225   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:43:58.798252   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:58.848969   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:43:58.849000   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:58.887826   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:43:58.887856   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:58.914297   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:43:58.914322   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:44:01.448548   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:44:01.459433   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:44:01.459507   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:44:01.490534   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:01.490566   51251 cri.go:89] found id: ""
	I1018 17:44:01.490575   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:44:01.490649   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:01.494451   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:44:01.494547   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:44:01.522081   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:01.522104   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:01.522109   51251 cri.go:89] found id: ""
	I1018 17:44:01.522117   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:44:01.522175   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:01.526069   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:01.529977   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:44:01.530054   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:44:01.557411   51251 cri.go:89] found id: ""
	I1018 17:44:01.557433   51251 logs.go:282] 0 containers: []
	W1018 17:44:01.557442   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:44:01.557448   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:44:01.557508   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:44:01.585118   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:01.585142   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:01.585147   51251 cri.go:89] found id: ""
	I1018 17:44:01.585155   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:44:01.585218   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:01.588900   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:01.592735   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:44:01.592820   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:44:01.621026   51251 cri.go:89] found id: ""
	I1018 17:44:01.621098   51251 logs.go:282] 0 containers: []
	W1018 17:44:01.621121   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:44:01.621140   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:44:01.621227   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:44:01.649479   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:01.649503   51251 cri.go:89] found id: ""
	I1018 17:44:01.649512   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:44:01.649576   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:01.653509   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:44:01.653601   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:44:01.680380   51251 cri.go:89] found id: ""
	I1018 17:44:01.680405   51251 logs.go:282] 0 containers: []
	W1018 17:44:01.680413   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:44:01.680445   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:44:01.680470   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:01.719413   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:44:01.719445   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:01.778065   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:44:01.778113   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:44:01.863062   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:44:01.863098   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:44:01.933290   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:44:01.925181    5079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:01.926041    5079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:01.926645    5079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:01.928011    5079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:01.928516    5079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:44:01.925181    5079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:01.926041    5079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:01.926645    5079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:01.928011    5079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:01.928516    5079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:44:01.933312   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:44:01.933325   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:01.994141   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:44:01.994175   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:02.027406   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:44:02.027433   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:02.058305   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:44:02.058374   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:44:02.089161   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:44:02.089238   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:44:02.197504   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:44:02.197547   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:44:02.220679   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:44:02.220704   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:04.749655   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:44:04.761329   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:44:04.761399   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:44:04.791310   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:04.791330   51251 cri.go:89] found id: ""
	I1018 17:44:04.791338   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:44:04.791391   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:04.795236   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:44:04.795315   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:44:04.826977   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:04.826999   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:04.827004   51251 cri.go:89] found id: ""
	I1018 17:44:04.827012   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:44:04.827071   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:04.831056   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:04.834547   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:44:04.834619   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:44:04.861994   51251 cri.go:89] found id: ""
	I1018 17:44:04.862019   51251 logs.go:282] 0 containers: []
	W1018 17:44:04.862028   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:44:04.862036   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:44:04.862093   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:44:04.891547   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:04.891568   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:04.891573   51251 cri.go:89] found id: ""
	I1018 17:44:04.891580   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:44:04.891664   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:04.895286   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:04.898803   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:44:04.898879   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:44:04.925892   51251 cri.go:89] found id: ""
	I1018 17:44:04.925917   51251 logs.go:282] 0 containers: []
	W1018 17:44:04.925925   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:44:04.925932   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:44:04.925992   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:44:04.950898   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:04.950920   51251 cri.go:89] found id: ""
	I1018 17:44:04.950937   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:44:04.950992   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:04.954458   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:44:04.954524   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:44:04.985795   51251 cri.go:89] found id: ""
	I1018 17:44:04.985818   51251 logs.go:282] 0 containers: []
	W1018 17:44:04.985826   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:44:04.985845   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:44:04.985857   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:05.039846   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:44:05.039880   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:05.074700   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:44:05.074733   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:05.123696   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:44:05.123722   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:05.162141   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:44:05.162168   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:05.233397   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:44:05.233431   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:05.260751   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:44:05.260780   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:44:05.342549   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:44:05.342585   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:44:05.374809   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:44:05.374833   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:44:05.480225   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:44:05.480260   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:44:05.492409   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:44:05.492433   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:44:05.563815   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:44:05.554079    5263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:05.554775    5263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:05.556564    5263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:05.557183    5263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:05.558926    5263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:44:05.554079    5263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:05.554775    5263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:05.556564    5263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:05.557183    5263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:05.558926    5263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:44:08.065115   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:44:08.076338   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:44:08.076434   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:44:08.104997   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:08.105072   51251 cri.go:89] found id: ""
	I1018 17:44:08.105096   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:44:08.105171   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:08.109342   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:44:08.109473   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:44:08.142036   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:08.142059   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:08.142063   51251 cri.go:89] found id: ""
	I1018 17:44:08.142071   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:44:08.142127   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:08.145811   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:08.149071   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:44:08.149138   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:44:08.178455   51251 cri.go:89] found id: ""
	I1018 17:44:08.178476   51251 logs.go:282] 0 containers: []
	W1018 17:44:08.178485   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:44:08.178491   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:44:08.178547   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:44:08.211837   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:08.211858   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:08.211862   51251 cri.go:89] found id: ""
	I1018 17:44:08.211871   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:44:08.211926   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:08.215306   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:08.218688   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:44:08.218753   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:44:08.245955   51251 cri.go:89] found id: ""
	I1018 17:44:08.245978   51251 logs.go:282] 0 containers: []
	W1018 17:44:08.245987   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:44:08.245994   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:44:08.246072   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:44:08.277970   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:08.277992   51251 cri.go:89] found id: ""
	I1018 17:44:08.278011   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:44:08.278083   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:08.281866   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:44:08.281956   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:44:08.314813   51251 cri.go:89] found id: ""
	I1018 17:44:08.314835   51251 logs.go:282] 0 containers: []
	W1018 17:44:08.314844   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:44:08.314853   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:44:08.314888   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:44:08.326805   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:44:08.326836   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:08.360439   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:44:08.360467   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:08.388919   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:44:08.388973   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:44:08.486321   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:44:08.486351   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:44:08.552337   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:44:08.544684    5352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:08.545314    5352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:08.546893    5352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:08.547374    5352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:08.548846    5352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:44:08.544684    5352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:08.545314    5352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:08.546893    5352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:08.547374    5352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:08.548846    5352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:44:08.552356   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:44:08.552369   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:08.577416   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:44:08.577441   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:08.629938   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:44:08.629973   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:08.689554   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:44:08.689585   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:08.719107   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:44:08.719132   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:44:08.799512   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:44:08.799588   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:44:11.341509   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:44:11.352018   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:44:11.352091   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:44:11.378915   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:11.378937   51251 cri.go:89] found id: ""
	I1018 17:44:11.378946   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:44:11.379001   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:11.382407   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:44:11.382471   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:44:11.407787   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:11.407806   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:11.407811   51251 cri.go:89] found id: ""
	I1018 17:44:11.407818   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:44:11.407902   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:11.411921   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:11.415171   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:44:11.415239   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:44:11.440964   51251 cri.go:89] found id: ""
	I1018 17:44:11.440986   51251 logs.go:282] 0 containers: []
	W1018 17:44:11.440995   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:44:11.441001   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:44:11.441056   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:44:11.470489   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:11.470512   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:11.470516   51251 cri.go:89] found id: ""
	I1018 17:44:11.470523   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:44:11.470579   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:11.474310   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:11.477884   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:44:11.477960   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:44:11.504799   51251 cri.go:89] found id: ""
	I1018 17:44:11.504862   51251 logs.go:282] 0 containers: []
	W1018 17:44:11.504885   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:44:11.504906   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:44:11.505006   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:44:11.533920   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:11.533983   51251 cri.go:89] found id: ""
	I1018 17:44:11.534003   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:44:11.534091   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:11.537702   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:44:11.537789   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:44:11.564923   51251 cri.go:89] found id: ""
	I1018 17:44:11.565058   51251 logs.go:282] 0 containers: []
	W1018 17:44:11.565068   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:44:11.565077   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:44:11.565089   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:44:11.576916   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:44:11.577027   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:44:11.644089   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:44:11.636599    5471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:11.637224    5471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:11.638751    5471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:11.639193    5471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:11.640642    5471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:44:11.636599    5471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:11.637224    5471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:11.638751    5471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:11.639193    5471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:11.640642    5471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:44:11.644109   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:44:11.644123   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:11.698636   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:44:11.698669   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:11.760923   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:44:11.760958   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:11.787821   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:44:11.787851   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:11.820451   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:44:11.820482   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:44:11.851416   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:44:11.851442   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:44:11.946634   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:44:11.946674   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:11.975802   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:44:11.975830   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:12.010031   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:44:12.010112   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:44:14.600286   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:44:14.611078   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:44:14.611145   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:44:14.638095   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:14.638116   51251 cri.go:89] found id: ""
	I1018 17:44:14.638124   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:44:14.638205   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:14.641787   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:44:14.641856   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:44:14.668881   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:14.668904   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:14.668910   51251 cri.go:89] found id: ""
	I1018 17:44:14.668918   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:44:14.669001   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:14.672474   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:14.675764   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:44:14.675840   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:44:14.699628   51251 cri.go:89] found id: ""
	I1018 17:44:14.699652   51251 logs.go:282] 0 containers: []
	W1018 17:44:14.699660   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:44:14.699666   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:44:14.699723   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:44:14.724155   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:14.724177   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:14.724182   51251 cri.go:89] found id: ""
	I1018 17:44:14.724190   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:44:14.724260   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:14.728073   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:14.731467   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:44:14.731534   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:44:14.757304   51251 cri.go:89] found id: ""
	I1018 17:44:14.757327   51251 logs.go:282] 0 containers: []
	W1018 17:44:14.757354   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:44:14.757361   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:44:14.757420   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:44:14.784778   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:14.784799   51251 cri.go:89] found id: ""
	I1018 17:44:14.784808   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:44:14.784862   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:14.788408   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:44:14.788477   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:44:14.819756   51251 cri.go:89] found id: ""
	I1018 17:44:14.819778   51251 logs.go:282] 0 containers: []
	W1018 17:44:14.819796   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:44:14.819805   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:44:14.819816   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:14.844668   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:44:14.844698   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:44:14.876534   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:44:14.876564   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:44:14.980256   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:44:14.980340   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:15.044346   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:44:15.044386   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:15.121677   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:44:15.121713   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:44:15.203393   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:44:15.203428   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:44:15.219368   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:44:15.219394   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:44:15.296726   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:44:15.289112    5647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:15.289522    5647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:15.291014    5647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:15.291333    5647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:15.292981    5647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:44:15.289112    5647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:15.289522    5647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:15.291014    5647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:15.291333    5647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:15.292981    5647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:44:15.296748   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:44:15.296761   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:15.322490   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:44:15.322516   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:15.364728   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:44:15.364760   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:17.892524   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:44:17.903413   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:44:17.903482   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:44:17.931967   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:17.931989   51251 cri.go:89] found id: ""
	I1018 17:44:17.931997   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:44:17.932052   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:17.935895   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:44:17.936007   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:44:17.983924   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:17.983945   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:17.983950   51251 cri.go:89] found id: ""
	I1018 17:44:17.983958   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:44:17.984014   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:17.987660   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:17.991127   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:44:17.991201   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:44:18.022803   51251 cri.go:89] found id: ""
	I1018 17:44:18.022827   51251 logs.go:282] 0 containers: []
	W1018 17:44:18.022836   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:44:18.022843   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:44:18.022906   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:44:18.064735   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:18.064754   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:18.064759   51251 cri.go:89] found id: ""
	I1018 17:44:18.064767   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:44:18.064823   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:18.068536   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:18.072878   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:44:18.072982   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:44:18.100206   51251 cri.go:89] found id: ""
	I1018 17:44:18.100237   51251 logs.go:282] 0 containers: []
	W1018 17:44:18.100246   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:44:18.100253   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:44:18.100321   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:44:18.127552   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:18.127575   51251 cri.go:89] found id: ""
	I1018 17:44:18.127584   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:44:18.127641   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:18.131667   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:44:18.131732   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:44:18.162707   51251 cri.go:89] found id: ""
	I1018 17:44:18.162731   51251 logs.go:282] 0 containers: []
	W1018 17:44:18.162739   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:44:18.162748   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:44:18.162763   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:44:18.246228   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:44:18.238684    5739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:18.239276    5739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:18.240721    5739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:18.241146    5739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:18.242608    5739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:44:18.238684    5739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:18.239276    5739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:18.240721    5739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:18.241146    5739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:18.242608    5739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:44:18.246250   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:44:18.246263   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:18.277740   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:44:18.277764   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:18.343394   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:44:18.343427   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:18.383823   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:44:18.383854   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:18.443389   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:44:18.443420   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:18.469522   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:44:18.469550   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:44:18.545455   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:44:18.545487   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:44:18.592352   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:44:18.592376   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:44:18.695698   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:44:18.695735   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:44:18.707163   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:44:18.707192   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
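	At this point the harness is simply waiting for a kube-apiserver process to appear (the recurring pgrep probe) while every "describe nodes" attempt is refused on localhost:8443, so only the container, journal and dmesg logs above can be collected. A minimal manual probe of the same state, assuming SSH access to the node and reusing the exact crictl invocations shown in the log (the head -n1 selection and the healthz curl are illustrative assumptions, not part of the harness), might look like:
	
	    # check whether a kube-apiserver container exists and inspect its recent output
	    APISERVER_ID=$(sudo crictl ps -a --quiet --name=kube-apiserver | head -n1)
	    sudo crictl logs --tail 50 "$APISERVER_ID"
	    # probe the endpoint kubectl keeps failing to reach
	    curl -ksS https://localhost:8443/healthz || echo "apiserver not serving on 8443"
	
	If the healthz probe is refused while the container is listed as running, the next place to look is the apiserver and etcd container logs gathered above.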
	I1018 17:44:21.235420   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:44:21.245952   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:44:21.246019   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:44:21.271930   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:21.271997   51251 cri.go:89] found id: ""
	I1018 17:44:21.272019   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:44:21.272106   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:21.275968   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:44:21.276036   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:44:21.302979   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:21.302997   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:21.303001   51251 cri.go:89] found id: ""
	I1018 17:44:21.303008   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:44:21.303069   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:21.307879   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:21.311562   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:44:21.311627   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:44:21.339660   51251 cri.go:89] found id: ""
	I1018 17:44:21.339681   51251 logs.go:282] 0 containers: []
	W1018 17:44:21.339690   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:44:21.339695   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:44:21.339752   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:44:21.368389   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:21.368411   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:21.368416   51251 cri.go:89] found id: ""
	I1018 17:44:21.368424   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:44:21.368478   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:21.372383   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:21.375709   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:44:21.375779   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:44:21.401944   51251 cri.go:89] found id: ""
	I1018 17:44:21.402017   51251 logs.go:282] 0 containers: []
	W1018 17:44:21.402040   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:44:21.402058   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:44:21.402140   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:44:21.428284   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:21.428303   51251 cri.go:89] found id: ""
	I1018 17:44:21.428312   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:44:21.428392   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:21.432085   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:44:21.432163   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:44:21.456804   51251 cri.go:89] found id: ""
	I1018 17:44:21.456878   51251 logs.go:282] 0 containers: []
	W1018 17:44:21.456899   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:44:21.456922   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:44:21.456987   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:44:21.530466   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:44:21.522476    5877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:21.523226    5877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:21.524791    5877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:21.525409    5877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:21.526934    5877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:44:21.522476    5877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:21.523226    5877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:21.524791    5877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:21.525409    5877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:21.526934    5877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:44:21.530487   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:44:21.530500   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:21.583954   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:44:21.583988   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:21.624634   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:44:21.624667   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:21.683522   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:44:21.683555   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:21.712030   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:44:21.712058   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:21.743203   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:44:21.743227   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:44:21.823114   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:44:21.823149   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:44:21.854521   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:44:21.854548   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:44:21.957239   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:44:21.957276   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:44:21.974988   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:44:21.975013   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:24.514740   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:44:24.525668   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:44:24.525738   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:44:24.553057   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:24.553087   51251 cri.go:89] found id: ""
	I1018 17:44:24.553096   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:44:24.553152   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:24.556981   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:44:24.557053   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:44:24.583773   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:24.583796   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:24.583801   51251 cri.go:89] found id: ""
	I1018 17:44:24.583809   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:44:24.583864   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:24.587649   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:24.591283   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:44:24.591388   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:44:24.617918   51251 cri.go:89] found id: ""
	I1018 17:44:24.617940   51251 logs.go:282] 0 containers: []
	W1018 17:44:24.617949   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:44:24.617959   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:44:24.618025   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:44:24.643293   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:24.643319   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:24.643323   51251 cri.go:89] found id: ""
	I1018 17:44:24.643331   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:44:24.643391   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:24.647045   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:24.650422   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:44:24.650491   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:44:24.676556   51251 cri.go:89] found id: ""
	I1018 17:44:24.676629   51251 logs.go:282] 0 containers: []
	W1018 17:44:24.676652   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:44:24.676670   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:44:24.676753   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:44:24.703335   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:24.703354   51251 cri.go:89] found id: ""
	I1018 17:44:24.703362   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:44:24.703413   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:24.707043   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:44:24.707112   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:44:24.736770   51251 cri.go:89] found id: ""
	I1018 17:44:24.736793   51251 logs.go:282] 0 containers: []
	W1018 17:44:24.736802   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:44:24.736811   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:44:24.736821   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:44:24.831690   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:44:24.831725   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:44:24.845067   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:44:24.845094   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:44:24.915666   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:44:24.907247    6020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:24.907870    6020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:24.909378    6020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:24.910211    6020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:24.911689    6020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:44:24.907247    6020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:24.907870    6020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:24.909378    6020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:24.910211    6020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:24.911689    6020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:44:24.915715   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:44:24.915728   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:24.980758   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:44:24.980794   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:25.013913   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:44:25.013944   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:44:25.095710   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:44:25.095746   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:44:25.136366   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:44:25.136395   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:25.167081   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:44:25.167108   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:25.217068   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:44:25.217106   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:25.250444   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:44:25.250477   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:27.778976   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:44:27.789442   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:44:27.789511   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:44:27.816188   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:27.816211   51251 cri.go:89] found id: ""
	I1018 17:44:27.816219   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:44:27.816273   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:27.819794   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:44:27.819867   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:44:27.846400   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:27.846433   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:27.846439   51251 cri.go:89] found id: ""
	I1018 17:44:27.846461   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:44:27.846546   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:27.850346   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:27.853879   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:44:27.853956   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:44:27.880448   51251 cri.go:89] found id: ""
	I1018 17:44:27.880471   51251 logs.go:282] 0 containers: []
	W1018 17:44:27.880480   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:44:27.880486   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:44:27.880549   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:44:27.908354   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:27.908384   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:27.908389   51251 cri.go:89] found id: ""
	I1018 17:44:27.908397   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:44:27.908454   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:27.913635   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:27.917518   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:44:27.917589   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:44:27.944652   51251 cri.go:89] found id: ""
	I1018 17:44:27.944674   51251 logs.go:282] 0 containers: []
	W1018 17:44:27.944683   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:44:27.944689   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:44:27.944749   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:44:27.978127   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:27.978150   51251 cri.go:89] found id: ""
	I1018 17:44:27.978158   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:44:27.978217   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:27.982028   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:44:27.982097   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:44:28.010364   51251 cri.go:89] found id: ""
	I1018 17:44:28.010395   51251 logs.go:282] 0 containers: []
	W1018 17:44:28.010405   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:44:28.010414   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:44:28.010426   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:44:28.113197   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:44:28.113275   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:28.143438   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:44:28.143464   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:28.193919   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:44:28.193956   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:28.233324   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:44:28.233364   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:44:28.315086   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:44:28.315121   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:44:28.327446   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:44:28.327472   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:44:28.403227   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:44:28.392160    6186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:28.393002    6186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:28.395106    6186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:28.395823    6186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:28.397363    6186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:44:28.392160    6186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:28.393002    6186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:28.395106    6186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:28.395823    6186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:28.397363    6186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:44:28.403250   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:44:28.403262   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:28.467992   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:44:28.468024   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:28.495923   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:44:28.495947   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:28.526646   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:44:28.526674   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
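	The "container status" step above is built as a shell fallback: it resolves crictl with `which` and, if the crictl listing fails, falls back to `docker ps -a`. Written out as a standalone check (same commands as the quoted log line; only the explicit branching is added here for illustration):
	
	    if which crictl >/dev/null 2>&1; then
	        sudo crictl ps -a      # preferred path on CRI-O nodes
	    else
	        sudo docker ps -a      # last-resort fallback when crictl is absent
	    fi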
	I1018 17:44:31.058337   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:44:31.069976   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:44:31.070050   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:44:31.101306   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:31.101328   51251 cri.go:89] found id: ""
	I1018 17:44:31.101336   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:44:31.101399   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:31.105055   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:44:31.105128   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:44:31.142563   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:31.142588   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:31.142593   51251 cri.go:89] found id: ""
	I1018 17:44:31.142600   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:44:31.142662   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:31.146604   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:31.150365   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:44:31.150435   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:44:31.176760   51251 cri.go:89] found id: ""
	I1018 17:44:31.176785   51251 logs.go:282] 0 containers: []
	W1018 17:44:31.176793   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:44:31.176800   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:44:31.176894   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:44:31.209000   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:31.209022   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:31.209027   51251 cri.go:89] found id: ""
	I1018 17:44:31.209034   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:44:31.209092   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:31.213702   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:31.217030   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:44:31.217134   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:44:31.244577   51251 cri.go:89] found id: ""
	I1018 17:44:31.244600   51251 logs.go:282] 0 containers: []
	W1018 17:44:31.244608   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:44:31.244615   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:44:31.244694   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:44:31.276009   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:31.276030   51251 cri.go:89] found id: ""
	I1018 17:44:31.276037   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:44:31.276126   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:31.279948   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:44:31.280039   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:44:31.312074   51251 cri.go:89] found id: ""
	I1018 17:44:31.312098   51251 logs.go:282] 0 containers: []
	W1018 17:44:31.312108   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:44:31.312117   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:44:31.312146   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:31.374723   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:44:31.374758   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:31.402419   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:44:31.402446   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:31.430538   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:44:31.430564   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:44:31.512803   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:44:31.512837   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:44:31.614079   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:44:31.614114   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:44:31.681910   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:44:31.673049    6314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:31.673806    6314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:31.675573    6314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:31.676196    6314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:31.677982    6314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:44:31.673049    6314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:31.673806    6314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:31.675573    6314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:31.676196    6314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:31.677982    6314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:44:31.681935   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:44:31.681956   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:31.707698   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:44:31.707730   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:31.744929   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:44:31.745030   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:44:31.776082   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:44:31.776119   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:44:31.788990   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:44:31.789026   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:34.355514   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:44:34.366625   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:44:34.366689   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:44:34.394220   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:34.394241   51251 cri.go:89] found id: ""
	I1018 17:44:34.394249   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:44:34.394307   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:34.398229   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:44:34.398301   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:44:34.428966   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:34.428987   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:34.428991   51251 cri.go:89] found id: ""
	I1018 17:44:34.428999   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:44:34.429056   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:34.438000   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:34.443562   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:44:34.443638   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:44:34.470520   51251 cri.go:89] found id: ""
	I1018 17:44:34.470583   51251 logs.go:282] 0 containers: []
	W1018 17:44:34.470596   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:44:34.470603   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:44:34.470660   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:44:34.498015   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:34.498035   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:34.498040   51251 cri.go:89] found id: ""
	I1018 17:44:34.498047   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:44:34.498107   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:34.501820   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:34.505392   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:44:34.505508   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:44:34.531261   51251 cri.go:89] found id: ""
	I1018 17:44:34.531285   51251 logs.go:282] 0 containers: []
	W1018 17:44:34.531294   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:44:34.531301   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:44:34.531391   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:44:34.558417   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:34.558439   51251 cri.go:89] found id: ""
	I1018 17:44:34.558448   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:44:34.558506   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:34.562283   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:44:34.562397   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:44:34.589239   51251 cri.go:89] found id: ""
	I1018 17:44:34.589263   51251 logs.go:282] 0 containers: []
	W1018 17:44:34.589271   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:44:34.589280   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:44:34.589321   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:34.639508   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:44:34.639543   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:34.704073   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:44:34.704111   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:34.730079   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:44:34.730105   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:44:34.812757   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:44:34.812794   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:44:34.844323   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:44:34.844351   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:34.870994   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:44:34.871020   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:34.909712   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:44:34.909738   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:34.949435   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:44:34.949461   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:44:35.051363   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:44:35.051403   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:44:35.064297   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:44:35.064324   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:44:35.143040   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:44:35.134155    6490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:35.134888    6490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:35.136750    6490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:35.137513    6490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:35.139182    6490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:44:35.134155    6490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:35.134888    6490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:35.136750    6490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:35.137513    6490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:35.139182    6490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
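	The pgrep probe for kube-apiserver recurs at roughly three-second intervals in this run (17:44:21, :24, :27, :31, :34, :37), with a full log-gathering pass after each failed attempt. An equivalent wait loop, sketched with an assumed retry count of 20 (the actual timeout used by the harness is not shown in this excerpt), would be:
	
	    # poll for a running kube-apiserver process, matching the cadence seen in this log
	    for i in $(seq 1 20); do
	        if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
	            echo "kube-apiserver is running"
	            break
	        fi
	        sleep 3
	    done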
	I1018 17:44:37.644402   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:44:37.655473   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:44:37.655556   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:44:37.686712   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:37.686743   51251 cri.go:89] found id: ""
	I1018 17:44:37.686753   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:44:37.686818   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:37.690705   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:44:37.690780   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:44:37.717269   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:37.717288   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:37.717293   51251 cri.go:89] found id: ""
	I1018 17:44:37.717300   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:44:37.717365   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:37.721019   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:37.724434   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:44:37.724511   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:44:37.751507   51251 cri.go:89] found id: ""
	I1018 17:44:37.751529   51251 logs.go:282] 0 containers: []
	W1018 17:44:37.751548   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:44:37.751554   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:44:37.751612   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:44:37.780532   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:37.780550   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:37.780555   51251 cri.go:89] found id: ""
	I1018 17:44:37.780562   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:44:37.780620   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:37.784463   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:37.789038   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:44:37.789127   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:44:37.827207   51251 cri.go:89] found id: ""
	I1018 17:44:37.827234   51251 logs.go:282] 0 containers: []
	W1018 17:44:37.827243   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:44:37.827250   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:44:37.827328   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:44:37.854900   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:37.854962   51251 cri.go:89] found id: ""
	I1018 17:44:37.854986   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:44:37.855062   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:37.859902   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:44:37.859977   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:44:37.886300   51251 cri.go:89] found id: ""
	I1018 17:44:37.886365   51251 logs.go:282] 0 containers: []
	W1018 17:44:37.886388   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:44:37.886409   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:44:37.886446   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:44:37.984179   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:44:37.984212   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:44:38.054964   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:44:38.045702    6560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:38.046390    6560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:38.048099    6560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:38.048652    6560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:38.050343    6560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:44:38.045702    6560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:38.046390    6560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:38.048099    6560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:38.048652    6560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:38.050343    6560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:44:38.054994   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:44:38.055010   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:38.084660   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:44:38.084691   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:38.124518   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:44:38.124606   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:38.190852   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:44:38.190893   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:44:38.273991   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:44:38.274027   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:44:38.286517   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:44:38.286546   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:38.338543   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:44:38.338580   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:38.367716   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:44:38.367745   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:38.401155   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:44:38.401184   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:44:40.943389   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:44:40.954255   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:44:40.954330   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:44:40.990505   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:40.990526   51251 cri.go:89] found id: ""
	I1018 17:44:40.990535   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:44:40.990591   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:40.994301   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:44:40.994374   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:44:41.024101   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:41.024123   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:41.024128   51251 cri.go:89] found id: ""
	I1018 17:44:41.024135   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:44:41.024202   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:41.028135   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:41.031764   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:44:41.031846   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:44:41.058027   51251 cri.go:89] found id: ""
	I1018 17:44:41.058110   51251 logs.go:282] 0 containers: []
	W1018 17:44:41.058133   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:44:41.058154   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:44:41.058241   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:44:41.084363   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:41.084429   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:41.084447   51251 cri.go:89] found id: ""
	I1018 17:44:41.084468   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:44:41.084549   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:41.088275   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:41.091806   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:44:41.091872   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:44:41.119266   51251 cri.go:89] found id: ""
	I1018 17:44:41.119288   51251 logs.go:282] 0 containers: []
	W1018 17:44:41.119296   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:44:41.119302   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:44:41.119364   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:44:41.152142   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:41.152162   51251 cri.go:89] found id: ""
	I1018 17:44:41.152171   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:44:41.152233   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:41.155967   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:44:41.156039   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:44:41.183430   51251 cri.go:89] found id: ""
	I1018 17:44:41.183453   51251 logs.go:282] 0 containers: []
	W1018 17:44:41.183461   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:44:41.183470   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:44:41.183481   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:41.217575   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:44:41.217599   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:44:41.314633   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:44:41.314667   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:44:41.383386   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:44:41.373451    6704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:41.374006    6704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:41.375984    6704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:41.377691    6704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:41.379407    6704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:44:41.373451    6704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:41.374006    6704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:41.375984    6704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:41.377691    6704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:41.379407    6704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:44:41.383406   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:44:41.383419   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:41.446018   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:44:41.446089   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:41.488303   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:44:41.488335   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:41.520983   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:44:41.521012   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:44:41.604693   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:44:41.604726   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:44:41.638240   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:44:41.638266   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:44:41.649462   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:44:41.649486   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:41.674875   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:44:41.674902   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:44.238248   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:44:44.255175   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:44:44.255240   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:44:44.287509   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:44.287527   51251 cri.go:89] found id: ""
	I1018 17:44:44.287535   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:44:44.287592   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:44.292053   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:44:44.292125   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:44:44.323105   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:44.323123   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:44.323128   51251 cri.go:89] found id: ""
	I1018 17:44:44.323135   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:44:44.323191   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:44.327287   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:44.331002   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:44:44.331110   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:44:44.362329   51251 cri.go:89] found id: ""
	I1018 17:44:44.362393   51251 logs.go:282] 0 containers: []
	W1018 17:44:44.362415   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:44:44.362436   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:44:44.362517   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:44:44.393314   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:44.393384   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:44.393403   51251 cri.go:89] found id: ""
	I1018 17:44:44.393432   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:44:44.393510   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:44.397610   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:44.401568   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:44:44.401674   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:44:44.439288   51251 cri.go:89] found id: ""
	I1018 17:44:44.439350   51251 logs.go:282] 0 containers: []
	W1018 17:44:44.439370   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:44:44.439391   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:44:44.439473   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:44:44.477857   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:44:44.477920   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:44.477939   51251 cri.go:89] found id: ""
	I1018 17:44:44.477960   51251 logs.go:282] 2 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:44:44.478038   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:44.482903   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:44.487434   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:44:44.487551   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:44:44.527686   51251 cri.go:89] found id: ""
	I1018 17:44:44.527761   51251 logs.go:282] 0 containers: []
	W1018 17:44:44.527784   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:44:44.527823   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:44:44.527850   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:44:44.637841   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:44:44.637917   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:44:44.653818   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:44:44.653846   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:44:44.762008   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:44:44.751907    6845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:44.753161    6845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:44.755038    6845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:44.755967    6845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:44.757158    6845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:44:44.751907    6845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:44.753161    6845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:44.755038    6845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:44.755967    6845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:44.757158    6845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:44:44.762038   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:44:44.762067   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:44.798868   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:44:44.798900   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:44.850591   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:44:44.850634   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:44.938420   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:44:44.938472   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:44:44.980294   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:44:44.980372   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:44:45.089048   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:44:45.089096   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:45.196420   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:44:45.196522   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:45.246623   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:44:45.246803   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:45.295911   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:44:45.295955   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:44:47.851142   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:44:47.862455   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:44:47.862520   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:44:47.888902   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:47.888970   51251 cri.go:89] found id: ""
	I1018 17:44:47.888984   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:44:47.889042   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:47.893115   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:44:47.893208   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:44:47.923068   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:47.923087   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:47.923091   51251 cri.go:89] found id: ""
	I1018 17:44:47.923099   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:44:47.923170   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:47.927351   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:47.931468   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:44:47.931541   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:44:47.958620   51251 cri.go:89] found id: ""
	I1018 17:44:47.958642   51251 logs.go:282] 0 containers: []
	W1018 17:44:47.958651   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:44:47.958657   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:44:47.958717   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:44:47.988421   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:47.988494   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:47.988514   51251 cri.go:89] found id: ""
	I1018 17:44:47.988534   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:44:47.988616   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:47.992743   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:47.996667   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:44:47.996742   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:44:48.025533   51251 cri.go:89] found id: ""
	I1018 17:44:48.025560   51251 logs.go:282] 0 containers: []
	W1018 17:44:48.025568   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:44:48.025575   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:44:48.025654   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:44:48.053974   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:44:48.053997   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:48.054002   51251 cri.go:89] found id: ""
	I1018 17:44:48.054009   51251 logs.go:282] 2 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:44:48.054070   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:48.057945   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:48.061877   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:44:48.061953   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:44:48.090761   51251 cri.go:89] found id: ""
	I1018 17:44:48.090786   51251 logs.go:282] 0 containers: []
	W1018 17:44:48.090795   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:44:48.090805   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:44:48.090817   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:44:48.189723   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:44:48.189756   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:48.221709   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:44:48.221739   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:48.259440   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:44:48.259470   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:48.345516   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:44:48.345553   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:48.374446   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:44:48.374477   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:44:48.460806   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:44:48.460842   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:44:48.473713   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:44:48.473739   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:44:48.554183   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:44:48.545515    7023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:48.546813    7023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:48.547313    7023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:48.548898    7023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:48.549566    7023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:44:48.545515    7023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:48.546813    7023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:48.547313    7023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:48.548898    7023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:48.549566    7023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:44:48.554204   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:44:48.554217   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:48.609158   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:44:48.609190   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:44:48.636984   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:44:48.637062   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:48.664743   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:44:48.664822   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:44:51.198411   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:44:51.210016   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:44:51.210081   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:44:51.236981   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:51.237004   51251 cri.go:89] found id: ""
	I1018 17:44:51.237012   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:44:51.237077   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:51.240676   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:44:51.240750   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:44:51.269356   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:51.269382   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:51.269387   51251 cri.go:89] found id: ""
	I1018 17:44:51.269395   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:44:51.269453   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:51.273122   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:51.277060   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:44:51.277132   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:44:51.304766   51251 cri.go:89] found id: ""
	I1018 17:44:51.304790   51251 logs.go:282] 0 containers: []
	W1018 17:44:51.304799   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:44:51.304805   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:44:51.304865   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:44:51.332379   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:51.332401   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:51.332406   51251 cri.go:89] found id: ""
	I1018 17:44:51.332414   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:44:51.332474   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:51.336518   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:51.341898   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:44:51.341976   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:44:51.367678   51251 cri.go:89] found id: ""
	I1018 17:44:51.367708   51251 logs.go:282] 0 containers: []
	W1018 17:44:51.367726   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:44:51.367732   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:44:51.367796   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:44:51.394153   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:44:51.394175   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:51.394180   51251 cri.go:89] found id: ""
	I1018 17:44:51.394187   51251 logs.go:282] 2 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:44:51.394243   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:51.397993   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:51.401471   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:44:51.401578   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:44:51.428758   51251 cri.go:89] found id: ""
	I1018 17:44:51.428822   51251 logs.go:282] 0 containers: []
	W1018 17:44:51.428844   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:44:51.428870   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:44:51.428894   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:44:51.503688   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:44:51.495917    7130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:51.496423    7130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:51.498141    7130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:51.498547    7130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:51.500003    7130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:44:51.495917    7130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:51.496423    7130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:51.498141    7130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:51.498547    7130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:51.500003    7130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:44:51.503709   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:44:51.503722   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:51.532853   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:44:51.532878   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:51.596823   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:44:51.596858   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:44:51.623499   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:44:51.623527   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:51.653511   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:44:51.653538   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:44:51.743235   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:44:51.743280   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:44:51.775603   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:44:51.775632   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:44:51.875854   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:44:51.875890   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:44:51.893446   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:44:51.893471   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:51.928284   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:44:51.928316   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:51.997158   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:44:51.997193   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:54.531254   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:44:54.544073   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:44:54.544143   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:44:54.572505   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:54.572526   51251 cri.go:89] found id: ""
	I1018 17:44:54.572534   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:44:54.572589   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:54.576276   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:44:54.576349   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:44:54.608530   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:54.608552   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:54.608557   51251 cri.go:89] found id: ""
	I1018 17:44:54.608564   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:44:54.608620   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:54.612802   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:54.616507   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:44:54.616574   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:44:54.646887   51251 cri.go:89] found id: ""
	I1018 17:44:54.646909   51251 logs.go:282] 0 containers: []
	W1018 17:44:54.646918   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:44:54.646924   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:44:54.646985   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:44:54.673624   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:54.673641   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:54.673646   51251 cri.go:89] found id: ""
	I1018 17:44:54.673653   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:44:54.673708   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:54.677580   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:54.680915   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:44:54.681039   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:44:54.707856   51251 cri.go:89] found id: ""
	I1018 17:44:54.707882   51251 logs.go:282] 0 containers: []
	W1018 17:44:54.707890   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:44:54.707897   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:44:54.707985   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:44:54.739572   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:44:54.739596   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:54.739602   51251 cri.go:89] found id: ""
	I1018 17:44:54.739609   51251 logs.go:282] 2 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:44:54.739666   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:54.744278   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:54.747740   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:44:54.747812   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:44:54.786379   51251 cri.go:89] found id: ""
	I1018 17:44:54.786405   51251 logs.go:282] 0 containers: []
	W1018 17:44:54.786413   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:44:54.786423   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:44:54.786435   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:54.850541   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:44:54.850577   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:54.878112   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:44:54.878139   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:54.905434   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:44:54.905462   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:44:54.983610   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:44:54.974914    7302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:54.975800    7302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:54.977585    7302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:54.978207    7302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:54.979920    7302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:44:54.974914    7302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:54.975800    7302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:54.977585    7302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:54.978207    7302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:54.979920    7302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:44:54.983631   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:44:54.983643   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:44:55.018119   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:44:55.018148   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:44:55.096411   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:44:55.096446   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:44:55.134900   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:44:55.134926   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:44:55.237181   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:44:55.237214   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:44:55.250828   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:44:55.250858   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:55.281899   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:44:55.281928   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:55.339174   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:44:55.339208   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
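	
	The cycle above is minikube's log-collection loop: it first locates each control-plane container with crictl, then tails that container's log. The same data can be pulled by hand on the node with the commands already shown in the log (a minimal sketch; the container ID below is the kube-apiserver ID from this run and will differ on another node):
	
		# list the kube-apiserver container ID in any state
		sudo crictl ps -a --quiet --name=kube-apiserver
		# tail the last 400 lines of that container's log
		sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4
	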
	I1018 17:44:57.880428   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:44:57.891159   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:44:57.891231   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:44:57.921966   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:57.921988   51251 cri.go:89] found id: ""
	I1018 17:44:57.921996   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:44:57.922051   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:57.925877   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:44:57.925946   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:44:57.983701   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:57.983719   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:57.983724   51251 cri.go:89] found id: ""
	I1018 17:44:57.983731   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:44:57.983785   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:57.988147   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:57.991948   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:44:57.992055   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:44:58.027455   51251 cri.go:89] found id: ""
	I1018 17:44:58.027489   51251 logs.go:282] 0 containers: []
	W1018 17:44:58.027498   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:44:58.027504   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:44:58.027572   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:44:58.061874   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:58.061896   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:58.061902   51251 cri.go:89] found id: ""
	I1018 17:44:58.061911   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:44:58.061971   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:58.065752   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:58.069525   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:44:58.069600   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:44:58.099676   51251 cri.go:89] found id: ""
	I1018 17:44:58.099698   51251 logs.go:282] 0 containers: []
	W1018 17:44:58.099707   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:44:58.099720   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:44:58.099778   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:44:58.132718   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:44:58.132740   51251 cri.go:89] found id: ""
	I1018 17:44:58.132748   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:44:58.132803   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:58.136641   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:44:58.136718   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:44:58.161767   51251 cri.go:89] found id: ""
	I1018 17:44:58.161791   51251 logs.go:282] 0 containers: []
	W1018 17:44:58.161799   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:44:58.161808   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:44:58.161820   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:44:58.239848   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:44:58.231755    7434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:58.232488    7434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:58.234323    7434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:58.234970    7434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:58.236249    7434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:44:58.231755    7434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:58.232488    7434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:58.234323    7434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:58.234970    7434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:58.236249    7434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:44:58.239867   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:44:58.239879   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:58.265229   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:44:58.265253   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:58.316459   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:44:58.316495   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:58.382736   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:44:58.382771   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:44:58.461400   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:44:58.461435   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:44:58.496880   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:44:58.496905   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:44:58.600326   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:44:58.600360   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:44:58.612833   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:44:58.612860   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:58.652792   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:44:58.652823   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:58.683598   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:44:58.683624   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:01.209276   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:45:01.221741   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:45:01.221825   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:45:01.255998   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:01.256020   51251 cri.go:89] found id: ""
	I1018 17:45:01.256029   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:45:01.256090   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:01.260323   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:45:01.260410   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:45:01.290623   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:01.290646   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:01.290652   51251 cri.go:89] found id: ""
	I1018 17:45:01.290660   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:45:01.290722   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:01.294923   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:01.299340   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:45:01.299421   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:45:01.328205   51251 cri.go:89] found id: ""
	I1018 17:45:01.328234   51251 logs.go:282] 0 containers: []
	W1018 17:45:01.328244   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:45:01.328251   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:45:01.328321   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:45:01.360099   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:01.360123   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:01.360128   51251 cri.go:89] found id: ""
	I1018 17:45:01.360136   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:45:01.360209   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:01.364283   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:01.368572   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:45:01.368657   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:45:01.397092   51251 cri.go:89] found id: ""
	I1018 17:45:01.397161   51251 logs.go:282] 0 containers: []
	W1018 17:45:01.397184   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:45:01.397207   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:45:01.397297   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:45:01.426452   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:01.426520   51251 cri.go:89] found id: ""
	I1018 17:45:01.426537   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:45:01.426623   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:01.430959   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:45:01.431090   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:45:01.460044   51251 cri.go:89] found id: ""
	I1018 17:45:01.460085   51251 logs.go:282] 0 containers: []
	W1018 17:45:01.460095   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:45:01.460126   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:45:01.460171   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:45:01.536047   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:45:01.536083   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:45:01.548838   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:45:01.548870   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:01.581436   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:45:01.581464   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:01.639347   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:45:01.639384   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:01.667540   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:45:01.667571   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:45:01.714304   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:45:01.714330   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:45:01.813430   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:45:01.813510   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:45:01.882898   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:45:01.873459    7615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:01.874354    7615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:01.876306    7615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:01.877166    7615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:01.878779    7615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:45:01.873459    7615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:01.874354    7615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:01.876306    7615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:01.877166    7615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:01.878779    7615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:45:01.882921   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:45:01.882937   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:01.917303   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:45:01.917407   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:01.999403   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:45:01.999445   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:04.533522   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:45:04.544111   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:45:04.544187   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:45:04.570770   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:04.570840   51251 cri.go:89] found id: ""
	I1018 17:45:04.570855   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:45:04.570912   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:04.575103   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:45:04.575198   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:45:04.609501   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:04.609532   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:04.609537   51251 cri.go:89] found id: ""
	I1018 17:45:04.609545   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:45:04.609600   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:04.613955   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:04.617439   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:45:04.617516   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:45:04.645280   51251 cri.go:89] found id: ""
	I1018 17:45:04.645306   51251 logs.go:282] 0 containers: []
	W1018 17:45:04.645315   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:45:04.645324   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:45:04.645392   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:45:04.672130   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:04.672153   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:04.672158   51251 cri.go:89] found id: ""
	I1018 17:45:04.672167   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:45:04.672223   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:04.676297   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:04.681021   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:45:04.681099   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:45:04.707420   51251 cri.go:89] found id: ""
	I1018 17:45:04.707444   51251 logs.go:282] 0 containers: []
	W1018 17:45:04.707452   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:45:04.707461   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:45:04.707517   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:45:04.737533   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:04.737555   51251 cri.go:89] found id: ""
	I1018 17:45:04.737565   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:45:04.737631   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:04.741271   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:45:04.741342   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:45:04.767657   51251 cri.go:89] found id: ""
	I1018 17:45:04.767681   51251 logs.go:282] 0 containers: []
	W1018 17:45:04.767689   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:45:04.767699   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:45:04.767710   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:45:04.863553   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:45:04.863587   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:45:04.875569   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:45:04.875600   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:04.930436   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:45:04.930476   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:04.969240   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:45:04.969276   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:05.039302   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:45:05.039336   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:05.067077   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:45:05.067103   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:45:05.148387   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:45:05.148422   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:45:05.223337   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:45:05.215470    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:05.216065    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:05.217641    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:05.218213    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:05.219737    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:45:05.215470    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:05.216065    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:05.217641    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:05.218213    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:05.219737    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:45:05.223369   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:45:05.223382   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:05.249066   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:45:05.249091   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:05.280440   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:45:05.280465   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:45:07.817192   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:45:07.827427   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:45:07.827497   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:45:07.853178   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:07.853198   51251 cri.go:89] found id: ""
	I1018 17:45:07.853206   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:45:07.853261   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:07.857004   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:45:07.857072   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:45:07.882619   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:07.882640   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:07.882645   51251 cri.go:89] found id: ""
	I1018 17:45:07.882652   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:45:07.882716   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:07.886518   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:07.890146   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:45:07.890220   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:45:07.917313   51251 cri.go:89] found id: ""
	I1018 17:45:07.917338   51251 logs.go:282] 0 containers: []
	W1018 17:45:07.917351   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:45:07.917358   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:45:07.917421   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:45:07.950191   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:07.950218   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:07.950223   51251 cri.go:89] found id: ""
	I1018 17:45:07.950234   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:45:07.950304   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:07.953933   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:07.957694   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:45:07.957770   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:45:07.990144   51251 cri.go:89] found id: ""
	I1018 17:45:07.990167   51251 logs.go:282] 0 containers: []
	W1018 17:45:07.990176   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:45:07.990183   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:45:07.990240   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:45:08.023638   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:08.023660   51251 cri.go:89] found id: ""
	I1018 17:45:08.023669   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:45:08.023729   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:08.028231   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:45:08.028307   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:45:08.056653   51251 cri.go:89] found id: ""
	I1018 17:45:08.056678   51251 logs.go:282] 0 containers: []
	W1018 17:45:08.056687   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:45:08.056696   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:45:08.056708   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:45:08.132641   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:45:08.122188    7845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:08.122913    7845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:08.124506    7845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:08.124806    7845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:08.126307    7845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:45:08.122188    7845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:08.122913    7845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:08.124506    7845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:08.124806    7845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:08.126307    7845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:45:08.132662   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:45:08.132677   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:08.197105   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:45:08.197143   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:08.238131   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:45:08.238157   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:08.266672   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:45:08.266701   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:45:08.302562   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:45:08.302587   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:45:08.411059   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:45:08.411103   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:45:08.423232   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:45:08.423261   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:08.449524   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:45:08.449549   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:08.505779   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:45:08.505811   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:08.540674   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:45:08.540708   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
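	
	Besides per-container logs, each pass also tails host-level sources over SSH: the kubelet and CRI-O units via journalctl, kernel warnings via dmesg, and a full container inventory via crictl (falling back to docker). A minimal sketch using the exact commands the log records:
	
		sudo journalctl -u kubelet -n 400
		sudo journalctl -u crio -n 400
		sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
		sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
	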
	I1018 17:45:11.118218   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:45:11.130399   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:45:11.130521   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:45:11.164618   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:11.164637   51251 cri.go:89] found id: ""
	I1018 17:45:11.164644   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:45:11.164700   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:11.168380   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:45:11.168453   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:45:11.195034   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:11.195059   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:11.195065   51251 cri.go:89] found id: ""
	I1018 17:45:11.195072   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:45:11.195126   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:11.199134   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:11.203492   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:45:11.203557   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:45:11.230659   51251 cri.go:89] found id: ""
	I1018 17:45:11.230681   51251 logs.go:282] 0 containers: []
	W1018 17:45:11.230689   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:45:11.230697   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:45:11.230773   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:45:11.256814   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:11.256842   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:11.256847   51251 cri.go:89] found id: ""
	I1018 17:45:11.256855   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:45:11.256973   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:11.260554   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:11.263940   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:45:11.264009   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:45:11.289036   51251 cri.go:89] found id: ""
	I1018 17:45:11.289114   51251 logs.go:282] 0 containers: []
	W1018 17:45:11.289128   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:45:11.289134   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:45:11.289192   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:45:11.320844   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:11.320867   51251 cri.go:89] found id: ""
	I1018 17:45:11.320875   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:45:11.320928   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:11.324471   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:45:11.324537   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:45:11.350002   51251 cri.go:89] found id: ""
	I1018 17:45:11.350028   51251 logs.go:282] 0 containers: []
	W1018 17:45:11.350036   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:45:11.350045   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:45:11.350057   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:45:11.415699   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:45:11.407276    7984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:11.408085    7984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:11.409925    7984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:11.410627    7984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:11.412208    7984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:45:11.407276    7984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:11.408085    7984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:11.409925    7984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:11.410627    7984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:11.412208    7984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:45:11.415719   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:45:11.415732   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:11.467144   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:45:11.467178   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:11.500116   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:45:11.500149   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:11.565053   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:45:11.565083   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:45:11.594806   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:45:11.594833   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:11.621385   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:45:11.621416   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:11.649391   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:45:11.649418   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:11.681270   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:45:11.681294   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:45:11.758017   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:45:11.758049   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:45:11.856363   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:45:11.856394   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
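	
	Every "describe nodes" attempt in this window fails the same way: kubectl on the node cannot reach the apiserver on localhost:8443, even though a kube-apiserver container is found by crictl. The probe minikube runs (verbatim from the log) is the quickest manual check for whether the apiserver has come back up:
	
		sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
		# while the apiserver is down this exits with status 1 and prints:
		#   The connection to the server localhost:8443 was refused - did you specify the right host or port?
	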
	I1018 17:45:14.369690   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:45:14.380482   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:45:14.380582   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:45:14.406908   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:14.406929   51251 cri.go:89] found id: ""
	I1018 17:45:14.406937   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:45:14.406991   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:14.410922   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:45:14.410995   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:45:14.438715   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:14.438787   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:14.438805   51251 cri.go:89] found id: ""
	I1018 17:45:14.438825   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:45:14.438910   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:14.442634   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:14.446455   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:45:14.446583   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:45:14.472662   51251 cri.go:89] found id: ""
	I1018 17:45:14.472729   51251 logs.go:282] 0 containers: []
	W1018 17:45:14.472740   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:45:14.472749   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:45:14.472837   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:45:14.499722   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:14.499787   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:14.499804   51251 cri.go:89] found id: ""
	I1018 17:45:14.499826   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:45:14.499910   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:14.503638   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:14.507247   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:45:14.507364   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:45:14.534947   51251 cri.go:89] found id: ""
	I1018 17:45:14.534973   51251 logs.go:282] 0 containers: []
	W1018 17:45:14.534981   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:45:14.534987   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:45:14.535064   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:45:14.561664   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:14.561686   51251 cri.go:89] found id: ""
	I1018 17:45:14.561695   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:45:14.561753   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:14.565710   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:45:14.565806   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:45:14.595947   51251 cri.go:89] found id: ""
	I1018 17:45:14.595972   51251 logs.go:282] 0 containers: []
	W1018 17:45:14.595980   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:45:14.595990   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:45:14.596029   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:45:14.671772   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:45:14.671807   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:45:14.775531   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:45:14.775566   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:45:14.787782   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:45:14.787811   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:14.819786   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:45:14.819816   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:45:14.851924   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:45:14.851951   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:45:14.920046   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:45:14.911958    8150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:14.912762    8150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:14.914424    8150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:14.914744    8150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:14.916204    8150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:45:14.911958    8150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:14.912762    8150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:14.914424    8150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:14.914744    8150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:14.916204    8150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:45:14.920119   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:45:14.920139   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:14.977739   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:45:14.977775   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:15.032058   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:45:15.032091   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:15.102494   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:45:15.102529   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:15.138731   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:45:15.138757   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
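The pass above is one complete log-collection cycle; the same sequence repeats below every few seconds. Condensed into a standalone sketch (every command name is taken verbatim from the log lines above; the container ID placeholder is hypothetical and not a real ID from this run), one pass looks roughly like this:

	# Hedged sketch of a single collection pass, assuming SSH access to the control-plane node.
	# <ID> is a placeholder for a container ID returned by the crictl query below.
	sudo crictl ps -a --quiet --name=kube-apiserver          # list container IDs for a component by name
	sudo /usr/local/bin/crictl logs --tail 400 <ID>          # last 400 log lines of that container
	sudo journalctl -u kubelet -n 400                        # kubelet unit logs
	sudo journalctl -u crio -n 400                           # CRI-O unit logs
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
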
	I1018 17:45:17.666030   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:45:17.676690   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:45:17.676760   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:45:17.703559   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:17.703578   51251 cri.go:89] found id: ""
	I1018 17:45:17.703585   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:45:17.703638   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:17.707859   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:45:17.707930   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:45:17.735399   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:17.735422   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:17.735433   51251 cri.go:89] found id: ""
	I1018 17:45:17.735441   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:45:17.735498   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:17.739407   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:17.742711   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:45:17.742782   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:45:17.773860   51251 cri.go:89] found id: ""
	I1018 17:45:17.773930   51251 logs.go:282] 0 containers: []
	W1018 17:45:17.773946   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:45:17.773953   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:45:17.774014   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:45:17.800989   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:17.801015   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:17.801021   51251 cri.go:89] found id: ""
	I1018 17:45:17.801028   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:45:17.801094   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:17.805064   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:17.808714   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:45:17.808845   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:45:17.835041   51251 cri.go:89] found id: ""
	I1018 17:45:17.835065   51251 logs.go:282] 0 containers: []
	W1018 17:45:17.835073   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:45:17.835080   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:45:17.835141   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:45:17.866314   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:17.866337   51251 cri.go:89] found id: ""
	I1018 17:45:17.866345   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:45:17.866406   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:17.870038   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:45:17.870110   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:45:17.895894   51251 cri.go:89] found id: ""
	I1018 17:45:17.895916   51251 logs.go:282] 0 containers: []
	W1018 17:45:17.895925   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:45:17.895934   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:45:17.895945   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:45:17.998692   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:45:17.998766   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:45:18.015153   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:45:18.015182   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:18.068223   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:45:18.068259   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:45:18.154314   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:45:18.154356   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:45:18.243477   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:45:18.234737    8277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:18.235447    8277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:18.237270    8277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:18.237840    8277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:18.239403    8277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:45:18.234737    8277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:18.235447    8277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:18.237270    8277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:18.237840    8277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:18.239403    8277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:45:18.243497   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:45:18.243509   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:18.275940   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:45:18.275970   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:18.316930   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:45:18.316995   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:18.389081   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:45:18.389116   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:18.418930   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:45:18.418956   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:18.449161   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:45:18.449188   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:45:20.980259   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:45:20.991356   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:45:20.991427   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:45:21.028373   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:21.028396   51251 cri.go:89] found id: ""
	I1018 17:45:21.028404   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:45:21.028462   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:21.031989   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:45:21.032060   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:45:21.061105   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:21.061126   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:21.061138   51251 cri.go:89] found id: ""
	I1018 17:45:21.061147   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:45:21.061206   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:21.064983   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:21.068555   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:45:21.068622   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:45:21.095318   51251 cri.go:89] found id: ""
	I1018 17:45:21.095340   51251 logs.go:282] 0 containers: []
	W1018 17:45:21.095348   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:45:21.095354   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:45:21.095410   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:45:21.132132   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:21.132167   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:21.132172   51251 cri.go:89] found id: ""
	I1018 17:45:21.132195   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:45:21.132278   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:21.136778   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:21.140214   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:45:21.140288   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:45:21.172583   51251 cri.go:89] found id: ""
	I1018 17:45:21.172605   51251 logs.go:282] 0 containers: []
	W1018 17:45:21.172614   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:45:21.172620   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:45:21.172675   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:45:21.203092   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:21.203113   51251 cri.go:89] found id: ""
	I1018 17:45:21.203121   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:45:21.203176   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:21.207592   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:45:21.207657   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:45:21.235546   51251 cri.go:89] found id: ""
	I1018 17:45:21.235570   51251 logs.go:282] 0 containers: []
	W1018 17:45:21.235580   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:45:21.235589   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:45:21.235635   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:45:21.332614   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:45:21.332652   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:21.360929   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:45:21.361068   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:21.401211   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:45:21.401249   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:21.468558   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:45:21.468594   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:21.498171   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:45:21.498196   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:45:21.576112   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:45:21.576147   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:45:21.607742   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:45:21.607775   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:45:21.619918   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:45:21.619943   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:45:21.687350   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:45:21.679038    8451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:21.679743    8451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:21.681303    8451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:21.681885    8451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:21.683555    8451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:45:21.679038    8451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:21.679743    8451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:21.681303    8451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:21.681885    8451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:21.683555    8451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:45:21.687371   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:45:21.687384   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:21.742021   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:45:21.742057   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:24.270296   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:45:24.281336   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:45:24.281412   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:45:24.310155   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:24.310176   51251 cri.go:89] found id: ""
	I1018 17:45:24.310184   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:45:24.310236   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:24.314848   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:45:24.314949   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:45:24.343101   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:24.343140   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:24.343146   51251 cri.go:89] found id: ""
	I1018 17:45:24.343154   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:45:24.343214   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:24.347137   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:24.350301   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:45:24.350364   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:45:24.375739   51251 cri.go:89] found id: ""
	I1018 17:45:24.375763   51251 logs.go:282] 0 containers: []
	W1018 17:45:24.375774   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:45:24.375787   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:45:24.375845   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:45:24.414912   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:24.414933   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:24.414944   51251 cri.go:89] found id: ""
	I1018 17:45:24.414952   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:45:24.415006   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:24.419585   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:24.423104   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:45:24.423211   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:45:24.449615   51251 cri.go:89] found id: ""
	I1018 17:45:24.449639   51251 logs.go:282] 0 containers: []
	W1018 17:45:24.449647   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:45:24.449653   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:45:24.449709   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:45:24.476036   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:24.476057   51251 cri.go:89] found id: ""
	I1018 17:45:24.476065   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:45:24.476126   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:24.479757   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:45:24.479825   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:45:24.512386   51251 cri.go:89] found id: ""
	I1018 17:45:24.512409   51251 logs.go:282] 0 containers: []
	W1018 17:45:24.512417   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:45:24.512426   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:45:24.512438   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:24.538617   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:45:24.538645   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:24.592949   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:45:24.592984   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:24.621215   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:45:24.621242   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:45:24.697575   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:45:24.697611   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:45:24.769130   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:45:24.760873    8565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:24.761713    8565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:24.763257    8565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:24.763723    8565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:24.765324    8565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:45:24.760873    8565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:24.761713    8565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:24.763257    8565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:24.763723    8565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:24.765324    8565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:45:24.769206   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:45:24.769228   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:24.807477   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:45:24.807508   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:24.880464   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:45:24.880506   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:24.913114   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:45:24.913140   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:45:24.946306   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:45:24.946335   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:45:25.051970   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:45:25.052004   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:45:27.565286   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:45:27.576658   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:45:27.576726   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:45:27.613181   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:27.613202   51251 cri.go:89] found id: ""
	I1018 17:45:27.613210   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:45:27.613264   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:27.617394   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:45:27.617462   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:45:27.645391   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:27.645413   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:27.645418   51251 cri.go:89] found id: ""
	I1018 17:45:27.645426   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:45:27.645494   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:27.649249   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:27.652792   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:45:27.652866   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:45:27.679303   51251 cri.go:89] found id: ""
	I1018 17:45:27.679368   51251 logs.go:282] 0 containers: []
	W1018 17:45:27.679390   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:45:27.679408   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:45:27.679492   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:45:27.705387   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:27.705453   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:27.705466   51251 cri.go:89] found id: ""
	I1018 17:45:27.705475   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:45:27.705532   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:27.709305   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:27.713679   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:45:27.713761   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:45:27.740178   51251 cri.go:89] found id: ""
	I1018 17:45:27.740203   51251 logs.go:282] 0 containers: []
	W1018 17:45:27.740211   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:45:27.740218   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:45:27.740277   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:45:27.768320   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:27.768342   51251 cri.go:89] found id: ""
	I1018 17:45:27.768351   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:45:27.768416   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:27.772360   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:45:27.772471   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:45:27.797997   51251 cri.go:89] found id: ""
	I1018 17:45:27.798018   51251 logs.go:282] 0 containers: []
	W1018 17:45:27.798026   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:45:27.798049   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:45:27.798061   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:27.824302   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:45:27.824379   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:27.859099   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:45:27.859131   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:27.889803   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:45:27.889830   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:45:27.902196   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:45:27.902221   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:27.958924   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:45:27.958960   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:28.038453   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:45:28.038489   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:28.067717   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:45:28.067748   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:45:28.156959   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:45:28.156998   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:45:28.189533   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:45:28.189561   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:45:28.296814   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:45:28.296848   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:45:28.370306   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:45:28.360661    8742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:28.362171    8742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:28.362714    8742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:28.364316    8742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:28.364866    8742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:45:28.360661    8742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:28.362171    8742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:28.362714    8742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:28.364316    8742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:28.364866    8742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
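Every "describe nodes" attempt in this window fails identically: kubectl cannot reach the API server on localhost:8443, even though a kube-apiserver container (707afb86...) is consistently listed by crictl. A few hypothetical follow-up checks, not part of the recorded run, that would distinguish a crashed or still-starting apiserver from a listener problem:

	# Hypothetical checks (not executed in this report):
	sudo crictl ps --name kube-apiserver        # is the apiserver container running, or only created/exited?
	sudo ss -ltnp | grep 8443                   # is anything listening on the port kubectl dials?
	curl -sk https://localhost:8443/healthz     # probe the same endpoint that returns "connection refused"
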
	I1018 17:45:30.870515   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:45:30.881788   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:45:30.881863   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:45:30.910070   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:30.910091   51251 cri.go:89] found id: ""
	I1018 17:45:30.910099   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:45:30.910154   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:30.914699   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:45:30.914767   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:45:30.944925   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:30.944970   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:30.944975   51251 cri.go:89] found id: ""
	I1018 17:45:30.944982   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:45:30.945037   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:30.948747   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:30.954312   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:45:30.954375   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:45:30.992317   51251 cri.go:89] found id: ""
	I1018 17:45:30.992339   51251 logs.go:282] 0 containers: []
	W1018 17:45:30.992347   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:45:30.992353   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:45:30.992409   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:45:31.020830   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:31.020849   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:31.020853   51251 cri.go:89] found id: ""
	I1018 17:45:31.020860   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:45:31.020918   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:31.025302   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:31.028979   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:45:31.029048   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:45:31.066137   51251 cri.go:89] found id: ""
	I1018 17:45:31.066238   51251 logs.go:282] 0 containers: []
	W1018 17:45:31.066262   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:45:31.066295   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:45:31.066401   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:45:31.093628   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:31.093651   51251 cri.go:89] found id: ""
	I1018 17:45:31.093659   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:45:31.093747   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:31.097751   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:45:31.097830   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:45:31.126496   51251 cri.go:89] found id: ""
	I1018 17:45:31.126517   51251 logs.go:282] 0 containers: []
	W1018 17:45:31.126526   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:45:31.126535   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:45:31.126547   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:45:31.199157   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:45:31.190529    8811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:31.191738    8811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:31.193086    8811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:31.193754    8811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:31.195583    8811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:45:31.190529    8811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:31.191738    8811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:31.193086    8811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:31.193754    8811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:31.195583    8811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:45:31.199180   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:45:31.199192   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:31.227645   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:45:31.227672   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:31.299176   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:45:31.299211   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:31.331846   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:45:31.331870   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:45:31.408603   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:45:31.408637   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:45:31.443678   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:45:31.443708   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:45:31.543336   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:45:31.543370   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:31.584237   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:45:31.584267   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:31.657778   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:45:31.657815   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:31.687304   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:45:31.687331   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:45:34.200278   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:45:34.213848   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:45:34.213915   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:45:34.240838   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:34.240860   51251 cri.go:89] found id: ""
	I1018 17:45:34.240874   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:45:34.240930   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:34.244825   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:45:34.244901   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:45:34.271020   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:34.271040   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:34.271044   51251 cri.go:89] found id: ""
	I1018 17:45:34.271052   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:45:34.271106   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:34.274974   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:34.278648   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:45:34.278748   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:45:34.306959   51251 cri.go:89] found id: ""
	I1018 17:45:34.306980   51251 logs.go:282] 0 containers: []
	W1018 17:45:34.306988   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:45:34.307023   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:45:34.307092   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:45:34.332551   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:34.332573   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:34.332578   51251 cri.go:89] found id: ""
	I1018 17:45:34.332585   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:45:34.332641   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:34.336514   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:34.340414   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:45:34.340491   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:45:34.366530   51251 cri.go:89] found id: ""
	I1018 17:45:34.366556   51251 logs.go:282] 0 containers: []
	W1018 17:45:34.366566   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:45:34.366572   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:45:34.366633   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:45:34.393555   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:34.393573   51251 cri.go:89] found id: ""
	I1018 17:45:34.393581   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:45:34.393637   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:34.397566   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:45:34.397635   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:45:34.424542   51251 cri.go:89] found id: ""
	I1018 17:45:34.424566   51251 logs.go:282] 0 containers: []
	W1018 17:45:34.424575   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:45:34.424584   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:45:34.424595   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:45:34.436112   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:45:34.436137   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:45:34.507631   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:45:34.499819    8951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:34.500689    8951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:34.501741    8951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:34.502269    8951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:34.503964    8951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:45:34.499819    8951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:34.500689    8951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:34.501741    8951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:34.502269    8951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:34.503964    8951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:45:34.507654   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:45:34.507666   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:34.562029   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:45:34.562062   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:34.599739   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:45:34.599770   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:34.628468   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:45:34.628493   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:45:34.702022   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:45:34.702053   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:45:34.731823   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:45:34.731851   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:45:34.830492   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:45:34.830526   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:34.860325   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:45:34.860350   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:34.928523   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:45:34.928564   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
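(The cycle above keeps repeating because "kubectl describe nodes" fails with "connection refused" on localhost:8443, so minikube falls back to per-container logs. A quick way to confirm the same condition by hand from inside the node is sketched below; it assumes curl and crictl are available there, and the container ID placeholder is whatever the ps command prints.)

  # hypothetical manual check, not part of the recorded test output
  curl -k https://localhost:8443/healthz          # expect "connection refused" while the apiserver is down
  sudo crictl ps -a --name kube-apiserver         # find the apiserver container (running or exited)
  sudo crictl logs --tail 50 <container-id>       # inspect why it is not serving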
	I1018 17:45:37.460864   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:45:37.472124   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:45:37.472190   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:45:37.499832   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:37.499854   51251 cri.go:89] found id: ""
	I1018 17:45:37.499862   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:45:37.499920   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:37.503595   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:45:37.503663   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:45:37.531543   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:37.531563   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:37.531569   51251 cri.go:89] found id: ""
	I1018 17:45:37.531576   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:45:37.531630   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:37.535265   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:37.538643   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:45:37.538712   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:45:37.565328   51251 cri.go:89] found id: ""
	I1018 17:45:37.565359   51251 logs.go:282] 0 containers: []
	W1018 17:45:37.565368   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:45:37.565374   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:45:37.565434   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:45:37.602468   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:37.602489   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:37.602494   51251 cri.go:89] found id: ""
	I1018 17:45:37.602501   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:45:37.602557   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:37.606311   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:37.609849   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:45:37.609919   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:45:37.640018   51251 cri.go:89] found id: ""
	I1018 17:45:37.640087   51251 logs.go:282] 0 containers: []
	W1018 17:45:37.640110   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:45:37.640131   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:45:37.640216   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:45:37.666232   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:37.666305   51251 cri.go:89] found id: ""
	I1018 17:45:37.666334   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:45:37.666402   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:37.669826   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:45:37.669905   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:45:37.696068   51251 cri.go:89] found id: ""
	I1018 17:45:37.696104   51251 logs.go:282] 0 containers: []
	W1018 17:45:37.696112   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:45:37.696121   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:45:37.696158   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:37.767014   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:45:37.767049   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:37.799133   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:45:37.799158   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:45:37.883995   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:45:37.884029   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:37.919112   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:45:37.919145   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:37.968245   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:45:37.968269   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:45:38.008695   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:45:38.008740   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:45:38.109431   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:45:38.109506   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:45:38.124458   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:45:38.124529   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:45:38.217277   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:45:38.191743    9135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:38.192499    9135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:38.207164    9135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:38.208077    9135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:38.209702    9135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:45:38.191743    9135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:38.192499    9135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:38.207164    9135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:38.208077    9135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:38.209702    9135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:45:38.217297   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:45:38.217310   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:38.247001   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:45:38.247027   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:40.816985   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:45:40.827390   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:45:40.827474   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:45:40.854344   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:40.854363   51251 cri.go:89] found id: ""
	I1018 17:45:40.854371   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:45:40.854426   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:40.858780   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:45:40.858879   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:45:40.888649   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:40.888707   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:40.888726   51251 cri.go:89] found id: ""
	I1018 17:45:40.888754   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:45:40.888823   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:40.893141   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:40.897039   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:45:40.897111   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:45:40.930280   51251 cri.go:89] found id: ""
	I1018 17:45:40.930304   51251 logs.go:282] 0 containers: []
	W1018 17:45:40.930313   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:45:40.930319   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:45:40.930375   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:45:40.957741   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:40.957764   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:40.957769   51251 cri.go:89] found id: ""
	I1018 17:45:40.957777   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:45:40.957854   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:40.962938   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:40.967322   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:45:40.967388   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:45:40.995139   51251 cri.go:89] found id: ""
	I1018 17:45:40.995216   51251 logs.go:282] 0 containers: []
	W1018 17:45:40.995230   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:45:40.995237   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:45:40.995304   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:45:41.025259   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:41.025280   51251 cri.go:89] found id: ""
	I1018 17:45:41.025287   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:45:41.025344   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:41.029459   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:45:41.029553   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:45:41.055678   51251 cri.go:89] found id: ""
	I1018 17:45:41.055710   51251 logs.go:282] 0 containers: []
	W1018 17:45:41.055719   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:45:41.055728   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:45:41.055745   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:45:41.097365   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:45:41.097395   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:45:41.108644   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:45:41.108669   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:41.152656   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:45:41.152685   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:45:41.240199   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:45:41.240234   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:45:41.347931   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:45:41.347967   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:45:41.414489   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:45:41.405260    9247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:41.405872    9247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:41.407642    9247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:41.408232    9247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:41.410751    9247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:45:41.405260    9247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:41.405872    9247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:41.407642    9247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:41.408232    9247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:41.410751    9247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:45:41.414511   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:45:41.414525   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:41.440777   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:45:41.440802   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:41.496567   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:45:41.496602   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:41.569402   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:45:41.569445   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:41.599116   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:45:41.599143   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
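(Each cycle opens with "sudo pgrep -xnf kube-apiserver.*minikube.*": minikube re-gathers logs every few seconds until that process shows up. A rough equivalent of that wait, assuming a bash shell on the node, would be:)

  # illustrative only; the real retry/backoff logic lives inside minikube
  until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
    sleep 3
  done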
	I1018 17:45:44.128092   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:45:44.139312   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:45:44.139380   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:45:44.166514   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:44.166533   51251 cri.go:89] found id: ""
	I1018 17:45:44.166541   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:45:44.166596   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:44.170245   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:45:44.170317   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:45:44.210379   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:44.210397   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:44.210402   51251 cri.go:89] found id: ""
	I1018 17:45:44.210410   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:45:44.210464   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:44.214239   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:44.217585   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:45:44.217650   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:45:44.242978   51251 cri.go:89] found id: ""
	I1018 17:45:44.243001   51251 logs.go:282] 0 containers: []
	W1018 17:45:44.243009   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:45:44.243016   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:45:44.243069   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:45:44.270660   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:44.270680   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:44.270685   51251 cri.go:89] found id: ""
	I1018 17:45:44.270692   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:45:44.270746   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:44.274435   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:44.278022   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:45:44.278090   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:45:44.314849   51251 cri.go:89] found id: ""
	I1018 17:45:44.314873   51251 logs.go:282] 0 containers: []
	W1018 17:45:44.314881   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:45:44.314887   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:45:44.314951   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:45:44.345002   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:44.345025   51251 cri.go:89] found id: ""
	I1018 17:45:44.345034   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:45:44.345091   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:44.348718   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:45:44.348785   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:45:44.373779   51251 cri.go:89] found id: ""
	I1018 17:45:44.373804   51251 logs.go:282] 0 containers: []
	W1018 17:45:44.373812   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:45:44.373828   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:45:44.373839   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:45:44.448448   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:45:44.448482   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:45:44.479822   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:45:44.479848   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:45:44.583615   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:45:44.583649   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:45:44.597191   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:45:44.597217   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:44.623357   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:45:44.623385   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:44.680939   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:45:44.680970   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:44.715142   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:45:44.715173   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:44.742106   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:45:44.742133   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:45:44.808539   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:45:44.799128    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:44.799968    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:44.801462    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:44.801790    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:44.803327    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:45:44.799128    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:44.799968    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:44.801462    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:44.801790    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:44.803327    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:45:44.808609   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:45:44.808640   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:44.878644   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:45:44.878682   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:47.415612   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:45:47.426226   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:45:47.426291   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:45:47.453489   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:47.453509   51251 cri.go:89] found id: ""
	I1018 17:45:47.453517   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:45:47.453571   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:47.457326   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:45:47.457406   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:45:47.482854   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:47.482921   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:47.482931   51251 cri.go:89] found id: ""
	I1018 17:45:47.482939   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:45:47.482996   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:47.487182   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:47.490682   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:45:47.490788   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:45:47.518326   51251 cri.go:89] found id: ""
	I1018 17:45:47.518348   51251 logs.go:282] 0 containers: []
	W1018 17:45:47.518357   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:45:47.518364   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:45:47.518423   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:45:47.545707   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:47.545729   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:47.545734   51251 cri.go:89] found id: ""
	I1018 17:45:47.545742   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:45:47.545795   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:47.549377   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:47.552749   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:45:47.552816   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:45:47.578086   51251 cri.go:89] found id: ""
	I1018 17:45:47.578108   51251 logs.go:282] 0 containers: []
	W1018 17:45:47.578116   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:45:47.578122   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:45:47.578179   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:45:47.621041   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:47.621110   51251 cri.go:89] found id: ""
	I1018 17:45:47.621124   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:45:47.621185   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:47.624873   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:45:47.624982   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:45:47.651153   51251 cri.go:89] found id: ""
	I1018 17:45:47.651180   51251 logs.go:282] 0 containers: []
	W1018 17:45:47.651189   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:45:47.651198   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:45:47.651227   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:45:47.748488   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:45:47.748523   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:45:47.816047   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:45:47.807483    9500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:47.808149    9500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:47.809893    9500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:47.810874    9500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:47.812453    9500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:45:47.807483    9500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:47.808149    9500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:47.809893    9500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:47.810874    9500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:47.812453    9500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:45:47.816068   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:45:47.816080   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:47.845226   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:45:47.845251   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:47.898646   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:45:47.898681   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:47.939440   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:45:47.939471   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:47.973436   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:45:47.973499   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:45:48.008222   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:45:48.008264   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:45:48.022115   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:45:48.022146   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:48.101167   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:45:48.101270   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:48.133470   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:45:48.133539   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:45:50.714735   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:45:50.728888   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:45:50.729016   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:45:50.759926   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:50.759949   51251 cri.go:89] found id: ""
	I1018 17:45:50.759958   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:45:50.760018   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:50.764094   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:45:50.764177   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:45:50.790739   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:50.790770   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:50.790776   51251 cri.go:89] found id: ""
	I1018 17:45:50.790784   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:45:50.790848   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:50.794745   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:50.798617   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:45:50.798692   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:45:50.827817   51251 cri.go:89] found id: ""
	I1018 17:45:50.827854   51251 logs.go:282] 0 containers: []
	W1018 17:45:50.827863   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:45:50.827870   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:45:50.827952   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:45:50.856700   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:50.856719   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:50.856723   51251 cri.go:89] found id: ""
	I1018 17:45:50.856731   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:45:50.856784   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:50.860815   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:50.864675   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:45:50.864745   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:45:50.889856   51251 cri.go:89] found id: ""
	I1018 17:45:50.889881   51251 logs.go:282] 0 containers: []
	W1018 17:45:50.889889   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:45:50.889896   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:45:50.889976   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:45:50.918684   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:50.918708   51251 cri.go:89] found id: ""
	I1018 17:45:50.918716   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:45:50.918800   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:50.924460   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:45:50.924531   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:45:50.951436   51251 cri.go:89] found id: ""
	I1018 17:45:50.951457   51251 logs.go:282] 0 containers: []
	W1018 17:45:50.951465   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:45:50.951475   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:45:50.951491   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:45:50.967914   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:45:50.967945   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:51.025758   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:45:51.025791   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:51.076423   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:45:51.076458   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:51.107878   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:45:51.107909   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:51.140881   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:45:51.140910   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:45:51.218816   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:45:51.218847   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:45:51.285410   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:45:51.278013    9675 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:51.278510    9675 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:51.279993    9675 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:51.280335    9675 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:51.281812    9675 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:45:51.278013    9675 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:51.278510    9675 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:51.279993    9675 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:51.280335    9675 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:51.281812    9675 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:45:51.285432   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:45:51.285444   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:51.314747   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:45:51.314775   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:51.388168   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:45:51.388242   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:45:51.424772   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:45:51.424801   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:45:54.026323   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:45:54.037679   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:45:54.037753   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:45:54.064502   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:54.064524   51251 cri.go:89] found id: ""
	I1018 17:45:54.064532   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:45:54.064585   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:54.068305   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:45:54.068376   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:45:54.097996   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:54.098018   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:54.098023   51251 cri.go:89] found id: ""
	I1018 17:45:54.098031   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:45:54.098085   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:54.102024   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:54.105866   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:45:54.105944   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:45:54.139891   51251 cri.go:89] found id: ""
	I1018 17:45:54.139915   51251 logs.go:282] 0 containers: []
	W1018 17:45:54.139924   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:45:54.139931   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:45:54.139986   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:45:54.166319   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:54.166343   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:54.166347   51251 cri.go:89] found id: ""
	I1018 17:45:54.166355   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:45:54.166420   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:54.170521   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:54.174527   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:45:54.174590   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:45:54.219178   51251 cri.go:89] found id: ""
	I1018 17:45:54.219212   51251 logs.go:282] 0 containers: []
	W1018 17:45:54.219220   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:45:54.219227   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:45:54.219283   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:45:54.246579   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:54.246602   51251 cri.go:89] found id: ""
	I1018 17:45:54.246610   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:45:54.246667   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:54.250546   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:45:54.250651   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:45:54.282408   51251 cri.go:89] found id: ""
	I1018 17:45:54.282432   51251 logs.go:282] 0 containers: []
	W1018 17:45:54.282440   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:45:54.282449   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:45:54.282460   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:45:54.367430   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:45:54.348041    9774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:54.348865    9774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:54.361407    9774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:54.362108    9774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:54.363737    9774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:45:54.348041    9774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:54.348865    9774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:54.361407    9774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:54.362108    9774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:54.363737    9774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:45:54.367454   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:45:54.367467   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:54.393831   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:45:54.393863   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:54.435123   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:45:54.435155   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:54.491144   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:45:54.491188   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:54.527193   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:45:54.527223   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:54.604327   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:45:54.604369   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:54.636282   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:45:54.636312   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:45:54.714664   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:45:54.714698   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:45:54.752480   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:45:54.752508   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:45:54.858349   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:45:54.858422   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:45:57.373300   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:45:57.384246   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:45:57.384335   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:45:57.415506   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:57.415571   51251 cri.go:89] found id: ""
	I1018 17:45:57.415595   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:45:57.415671   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:57.419389   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:45:57.419503   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:45:57.445186   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:57.445206   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:57.445211   51251 cri.go:89] found id: ""
	I1018 17:45:57.445219   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:45:57.445281   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:57.449004   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:57.452413   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:45:57.452492   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:45:57.477864   51251 cri.go:89] found id: ""
	I1018 17:45:57.477888   51251 logs.go:282] 0 containers: []
	W1018 17:45:57.477896   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:45:57.477903   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:45:57.477962   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:45:57.504898   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:57.504920   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:57.504931   51251 cri.go:89] found id: ""
	I1018 17:45:57.504977   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:45:57.505034   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:57.509061   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:57.513614   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:45:57.513685   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:45:57.544310   51251 cri.go:89] found id: ""
	I1018 17:45:57.544332   51251 logs.go:282] 0 containers: []
	W1018 17:45:57.544340   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:45:57.544346   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:45:57.544403   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:45:57.571245   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:57.571266   51251 cri.go:89] found id: ""
	I1018 17:45:57.571274   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:45:57.571331   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:57.575106   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:45:57.575176   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:45:57.606111   51251 cri.go:89] found id: ""
	I1018 17:45:57.606144   51251 logs.go:282] 0 containers: []
	W1018 17:45:57.606154   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:45:57.606162   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:45:57.606175   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:57.634184   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:45:57.634212   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:57.700157   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:45:57.700193   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:57.740730   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:45:57.740759   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:57.767473   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:45:57.767501   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:57.792761   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:45:57.792788   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:45:57.872610   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:45:57.872686   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:45:57.970465   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:45:57.970503   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:45:57.983943   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:45:57.983969   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:45:58.065431   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:45:58.056364    9964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:58.057407    9964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:58.058182    9964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:58.059825    9964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:58.060434    9964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:45:58.056364    9964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:58.057407    9964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:58.058182    9964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:58.059825    9964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:58.060434    9964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:45:58.065498   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:45:58.065512   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:58.140361   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:45:58.140407   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:46:00.709339   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:46:00.720914   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:46:00.721109   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:46:00.749016   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:00.749036   51251 cri.go:89] found id: ""
	I1018 17:46:00.749043   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:46:00.749098   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:00.752785   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:46:00.752913   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:46:00.780089   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:00.780157   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:00.780174   51251 cri.go:89] found id: ""
	I1018 17:46:00.780195   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:46:00.780277   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:00.784027   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:00.787918   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:46:00.787984   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:46:00.815886   51251 cri.go:89] found id: ""
	I1018 17:46:00.815911   51251 logs.go:282] 0 containers: []
	W1018 17:46:00.815920   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:46:00.815927   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:46:00.815984   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:46:00.843641   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:00.843672   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:00.843677   51251 cri.go:89] found id: ""
	I1018 17:46:00.843690   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:46:00.843749   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:00.857213   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:00.861599   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:46:00.861750   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:46:00.895883   51251 cri.go:89] found id: ""
	I1018 17:46:00.895957   51251 logs.go:282] 0 containers: []
	W1018 17:46:00.895981   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:46:00.896000   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:46:00.896070   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:46:00.925992   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:00.926061   51251 cri.go:89] found id: ""
	I1018 17:46:00.926086   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:46:00.926167   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:00.930024   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:46:00.930108   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:46:00.958457   51251 cri.go:89] found id: ""
	I1018 17:46:00.958482   51251 logs.go:282] 0 containers: []
	W1018 17:46:00.958490   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:46:00.958499   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:46:00.958511   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:01.035152   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:46:01.035187   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:01.069631   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:46:01.069662   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:01.099442   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:46:01.099466   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:46:01.185919   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:46:01.185957   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:46:01.233776   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:46:01.233801   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:46:01.247414   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:46:01.247442   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:01.275612   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:46:01.275640   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:01.332794   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:46:01.332829   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:01.367809   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:46:01.367840   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:46:01.464892   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:46:01.464929   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:46:01.535577   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:46:01.527773   10121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:01.528316   10121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:01.530190   10121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:01.530564   10121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:01.531863   10121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:46:01.527773   10121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:01.528316   10121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:01.530190   10121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:01.530564   10121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:01.531863   10121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:46:04.037058   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:46:04.047958   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:46:04.048043   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:46:04.080745   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:04.080770   51251 cri.go:89] found id: ""
	I1018 17:46:04.080779   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:46:04.080837   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:04.084749   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:46:04.084819   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:46:04.113194   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:04.113268   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:04.113275   51251 cri.go:89] found id: ""
	I1018 17:46:04.113283   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:46:04.113374   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:04.117058   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:04.121021   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:46:04.121088   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:46:04.150209   51251 cri.go:89] found id: ""
	I1018 17:46:04.150233   51251 logs.go:282] 0 containers: []
	W1018 17:46:04.150242   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:46:04.150248   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:46:04.150308   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:46:04.182648   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:04.182719   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:04.182732   51251 cri.go:89] found id: ""
	I1018 17:46:04.182740   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:46:04.182811   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:04.187068   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:04.191187   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:46:04.191265   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:46:04.226123   51251 cri.go:89] found id: ""
	I1018 17:46:04.226147   51251 logs.go:282] 0 containers: []
	W1018 17:46:04.226158   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:46:04.226165   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:46:04.226226   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:46:04.252111   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:04.252132   51251 cri.go:89] found id: ""
	I1018 17:46:04.252141   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:46:04.252196   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:04.255953   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:46:04.256026   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:46:04.287389   51251 cri.go:89] found id: ""
	I1018 17:46:04.287415   51251 logs.go:282] 0 containers: []
	W1018 17:46:04.287423   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:46:04.287432   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:46:04.287443   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:46:04.321947   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:46:04.321973   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:46:04.430342   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:46:04.430376   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:46:04.442744   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:46:04.442769   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:46:04.506948   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:46:04.498862   10206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:04.499448   10206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:04.501006   10206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:04.501596   10206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:04.503108   10206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:46:04.498862   10206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:04.499448   10206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:04.501006   10206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:04.501596   10206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:04.503108   10206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:46:04.507014   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:46:04.507043   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:04.543328   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:46:04.543361   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:04.572765   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:46:04.572798   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:04.602775   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:46:04.602801   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:04.658777   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:46:04.658812   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:04.732490   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:46:04.732537   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:04.759977   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:46:04.760005   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:46:07.339053   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:46:07.349656   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:46:07.349760   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:46:07.379978   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:07.380001   51251 cri.go:89] found id: ""
	I1018 17:46:07.380011   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:46:07.380093   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:07.383927   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:46:07.384018   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:46:07.409769   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:07.409800   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:07.409806   51251 cri.go:89] found id: ""
	I1018 17:46:07.409814   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:46:07.409902   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:07.413658   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:07.416960   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:46:07.417067   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:46:07.442892   51251 cri.go:89] found id: ""
	I1018 17:46:07.442916   51251 logs.go:282] 0 containers: []
	W1018 17:46:07.442924   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:46:07.442930   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:46:07.442989   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:46:07.469419   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:07.469440   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:07.469445   51251 cri.go:89] found id: ""
	I1018 17:46:07.469452   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:46:07.469508   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:07.473607   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:07.477386   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:46:07.477501   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:46:07.504080   51251 cri.go:89] found id: ""
	I1018 17:46:07.504105   51251 logs.go:282] 0 containers: []
	W1018 17:46:07.504116   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:46:07.504122   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:46:07.504231   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:46:07.531758   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:07.531781   51251 cri.go:89] found id: ""
	I1018 17:46:07.531790   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:46:07.531870   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:07.535733   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:46:07.535830   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:46:07.564437   51251 cri.go:89] found id: ""
	I1018 17:46:07.564463   51251 logs.go:282] 0 containers: []
	W1018 17:46:07.564471   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:46:07.564480   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:46:07.564524   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:07.628243   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:46:07.628278   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:46:07.662025   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:46:07.662052   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:46:07.764863   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:46:07.764897   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:46:07.776837   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:46:07.776865   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:46:07.847586   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:46:07.839604   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:07.840186   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:07.841835   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:07.842344   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:07.843875   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:46:07.839604   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:07.840186   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:07.841835   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:07.842344   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:07.843875   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:46:07.847606   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:46:07.847622   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:07.880085   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:46:07.880117   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:07.963636   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:46:07.963671   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:07.994194   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:46:07.994222   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:08.025564   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:46:08.025595   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:46:08.108415   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:46:08.108451   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:10.642798   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:46:10.653476   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:46:10.653548   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:46:10.679376   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:10.679398   51251 cri.go:89] found id: ""
	I1018 17:46:10.679407   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:46:10.679465   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:10.683355   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:46:10.683427   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:46:10.710429   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:10.710450   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:10.710454   51251 cri.go:89] found id: ""
	I1018 17:46:10.710461   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:46:10.710513   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:10.714130   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:10.717443   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:46:10.717506   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:46:10.744042   51251 cri.go:89] found id: ""
	I1018 17:46:10.744064   51251 logs.go:282] 0 containers: []
	W1018 17:46:10.744071   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:46:10.744078   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:46:10.744132   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:46:10.773166   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:10.773191   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:10.773196   51251 cri.go:89] found id: ""
	I1018 17:46:10.773203   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:46:10.773282   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:10.777442   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:10.781226   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:46:10.781299   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:46:10.808886   51251 cri.go:89] found id: ""
	I1018 17:46:10.808909   51251 logs.go:282] 0 containers: []
	W1018 17:46:10.808917   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:46:10.808924   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:46:10.809009   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:46:10.836634   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:10.836656   51251 cri.go:89] found id: ""
	I1018 17:46:10.836664   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:46:10.836720   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:10.840695   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:46:10.840772   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:46:10.869735   51251 cri.go:89] found id: ""
	I1018 17:46:10.869799   51251 logs.go:282] 0 containers: []
	W1018 17:46:10.869812   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:46:10.869822   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:46:10.869833   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:46:10.949626   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:46:10.949665   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:46:11.057346   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:46:11.057383   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:11.139105   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:46:11.139141   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:11.170764   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:46:11.170861   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:11.214148   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:46:11.214173   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:46:11.245381   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:46:11.245409   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:46:11.258609   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:46:11.258636   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:46:11.329040   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:46:11.320826   10509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:11.321453   10509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:11.322971   10509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:11.323467   10509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:11.325006   10509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:46:11.320826   10509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:11.321453   10509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:11.322971   10509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:11.323467   10509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:11.325006   10509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:46:11.329060   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:46:11.329072   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:11.354686   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:46:11.354710   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:11.393844   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:46:11.393872   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
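
The block above is one pass of minikube's control-plane probe while the cluster restarts: look for a running kube-apiserver process, list CRI containers for each component by name, then tail the logs of whatever was found along with the node's journal and dmesg. A minimal shell sketch of that same sequence, using only the commands visible in the log and a hypothetical CONTAINER_ID placeholder for an ID returned by the listing step:

	# Probe for a running apiserver process (the pgrep line at the start of each pass)
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# List all CRI containers for one component; --quiet prints only the IDs
	sudo crictl ps -a --quiet --name=kube-apiserver
	# Tail the last 400 lines of a found container's logs (CONTAINER_ID is a placeholder)
	sudo /usr/local/bin/crictl logs --tail 400 "$CONTAINER_ID"
	# Node-level logs gathered the same way each pass
	sudo journalctl -u crio -n 400
	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400

Only containers whose name matches are returned, which is why the kube-proxy, coredns and kindnet lookups come back empty in these passes.
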
	I1018 17:46:13.965067   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:46:13.977065   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:46:13.977139   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:46:14.006565   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:14.006590   51251 cri.go:89] found id: ""
	I1018 17:46:14.006600   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:46:14.006694   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:14.011312   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:46:14.011387   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:46:14.040339   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:14.040367   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:14.040372   51251 cri.go:89] found id: ""
	I1018 17:46:14.040380   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:46:14.040437   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:14.044065   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:14.047760   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:46:14.047831   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:46:14.074918   51251 cri.go:89] found id: ""
	I1018 17:46:14.074943   51251 logs.go:282] 0 containers: []
	W1018 17:46:14.074952   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:46:14.074960   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:46:14.075023   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:46:14.107504   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:14.107526   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:14.107531   51251 cri.go:89] found id: ""
	I1018 17:46:14.107539   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:46:14.107591   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:14.111227   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:14.114719   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:46:14.114811   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:46:14.145967   51251 cri.go:89] found id: ""
	I1018 17:46:14.146042   51251 logs.go:282] 0 containers: []
	W1018 17:46:14.146062   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:46:14.146082   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:46:14.146164   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:46:14.186824   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:14.186888   51251 cri.go:89] found id: ""
	I1018 17:46:14.186910   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:46:14.186990   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:14.190545   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:46:14.190628   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:46:14.226876   51251 cri.go:89] found id: ""
	I1018 17:46:14.226971   51251 logs.go:282] 0 containers: []
	W1018 17:46:14.226994   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:46:14.227020   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:46:14.227045   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:46:14.329164   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:46:14.329201   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:46:14.397274   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:46:14.389270   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:14.390097   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:14.391638   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:14.392076   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:14.393694   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:46:14.389270   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:14.390097   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:14.391638   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:14.392076   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:14.393694   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:46:14.397296   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:46:14.397309   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:14.426769   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:46:14.426796   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:14.486615   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:46:14.486650   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:14.559349   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:46:14.559386   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:14.587426   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:46:14.587455   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:46:14.664068   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:46:14.664104   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:46:14.675861   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:46:14.675886   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:14.708879   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:46:14.708911   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:14.736861   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:46:14.736890   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
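
The "container status" step in each pass uses a small fallback chain: it prefers whatever `which crictl` finds, substitutes the bare name if nothing is found, and falls back to docker only if the crictl call itself fails. The one-liner as it appears in the log:

	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a

On this CRI-O node crictl is present (the earlier `which crictl` calls succeed), so the docker branch should never be reached here.
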
	I1018 17:46:17.281896   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:46:17.292988   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:46:17.293081   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:46:17.321611   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:17.321634   51251 cri.go:89] found id: ""
	I1018 17:46:17.321642   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:46:17.321697   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:17.325317   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:46:17.325398   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:46:17.352512   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:17.352534   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:17.352538   51251 cri.go:89] found id: ""
	I1018 17:46:17.352546   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:46:17.352599   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:17.357098   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:17.360560   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:46:17.360677   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:46:17.390732   51251 cri.go:89] found id: ""
	I1018 17:46:17.390762   51251 logs.go:282] 0 containers: []
	W1018 17:46:17.390770   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:46:17.390778   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:46:17.390842   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:46:17.419824   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:17.419846   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:17.419851   51251 cri.go:89] found id: ""
	I1018 17:46:17.419858   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:46:17.419916   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:17.423710   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:17.427116   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:46:17.427185   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:46:17.453579   51251 cri.go:89] found id: ""
	I1018 17:46:17.453602   51251 logs.go:282] 0 containers: []
	W1018 17:46:17.453610   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:46:17.453617   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:46:17.453705   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:46:17.486285   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:17.486309   51251 cri.go:89] found id: ""
	I1018 17:46:17.486318   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:46:17.486372   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:17.490015   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:46:17.490104   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:46:17.518259   51251 cri.go:89] found id: ""
	I1018 17:46:17.518284   51251 logs.go:282] 0 containers: []
	W1018 17:46:17.518292   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:46:17.518301   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:46:17.518332   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:46:17.614000   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:46:17.614035   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:46:17.626518   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:46:17.626553   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:17.684157   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:46:17.684191   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:46:17.730343   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:46:17.730369   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:46:17.798308   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:46:17.789990   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:17.790724   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:17.792367   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:17.792674   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:17.794211   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:46:17.789990   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:17.790724   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:17.792367   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:17.792674   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:17.794211   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:46:17.798326   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:46:17.798338   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:17.823833   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:46:17.823857   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:17.865773   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:46:17.865799   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:17.935865   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:46:17.935900   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:17.978061   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:46:17.978088   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:18.006175   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:46:18.006205   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:46:20.594229   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:46:20.605152   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:46:20.605223   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:46:20.633212   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:20.633234   51251 cri.go:89] found id: ""
	I1018 17:46:20.633243   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:46:20.633310   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:20.637046   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:46:20.637118   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:46:20.663217   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:20.663238   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:20.663246   51251 cri.go:89] found id: ""
	I1018 17:46:20.663253   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:46:20.663325   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:20.667226   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:20.670621   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:46:20.670719   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:46:20.698213   51251 cri.go:89] found id: ""
	I1018 17:46:20.698235   51251 logs.go:282] 0 containers: []
	W1018 17:46:20.698244   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:46:20.698287   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:46:20.698367   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:46:20.730404   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:20.730434   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:20.730439   51251 cri.go:89] found id: ""
	I1018 17:46:20.730447   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:46:20.730519   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:20.734442   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:20.738131   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:46:20.738222   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:46:20.773079   51251 cri.go:89] found id: ""
	I1018 17:46:20.773149   51251 logs.go:282] 0 containers: []
	W1018 17:46:20.773171   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:46:20.773193   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:46:20.773277   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:46:20.800462   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:20.800534   51251 cri.go:89] found id: ""
	I1018 17:46:20.800569   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:46:20.800664   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:20.805115   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:46:20.805213   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:46:20.830418   51251 cri.go:89] found id: ""
	I1018 17:46:20.830442   51251 logs.go:282] 0 containers: []
	W1018 17:46:20.830451   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:46:20.830459   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:46:20.830470   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:20.912043   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:46:20.912075   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:20.938545   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:46:20.938572   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:20.977936   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:46:20.978010   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:46:21.013920   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:46:21.013950   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:46:21.119416   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:46:21.119450   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:46:21.132924   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:46:21.133048   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:46:21.220628   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:46:21.211038   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:21.212205   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:21.213238   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:21.213888   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:21.215798   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:46:21.211038   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:21.212205   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:21.213238   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:21.213888   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:21.215798   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:46:21.220657   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:46:21.220677   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:21.249593   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:46:21.249618   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:46:21.329125   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:46:21.329162   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:21.387066   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:46:21.387097   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:23.926900   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:46:23.937764   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:46:23.937832   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:46:23.976069   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:23.976129   51251 cri.go:89] found id: ""
	I1018 17:46:23.976159   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:46:23.976235   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:23.979863   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:46:23.979943   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:46:24.009930   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:24.009950   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:24.009954   51251 cri.go:89] found id: ""
	I1018 17:46:24.009963   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:46:24.010025   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:24.014274   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:24.018246   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:46:24.018317   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:46:24.046546   51251 cri.go:89] found id: ""
	I1018 17:46:24.046571   51251 logs.go:282] 0 containers: []
	W1018 17:46:24.046589   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:46:24.046596   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:46:24.046659   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:46:24.073391   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:24.073411   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:24.073416   51251 cri.go:89] found id: ""
	I1018 17:46:24.073428   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:46:24.073485   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:24.077447   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:24.081009   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:46:24.081083   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:46:24.108804   51251 cri.go:89] found id: ""
	I1018 17:46:24.108828   51251 logs.go:282] 0 containers: []
	W1018 17:46:24.108837   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:46:24.108843   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:46:24.108905   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:46:24.144321   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:24.144348   51251 cri.go:89] found id: ""
	I1018 17:46:24.144357   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:46:24.144413   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:24.148488   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:46:24.148592   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:46:24.176586   51251 cri.go:89] found id: ""
	I1018 17:46:24.176611   51251 logs.go:282] 0 containers: []
	W1018 17:46:24.176619   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:46:24.176629   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:46:24.176640   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:46:24.254257   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:46:24.245066   11017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:24.246406   11017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:24.248217   11017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:24.248923   11017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:24.250447   11017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:46:24.245066   11017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:24.246406   11017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:24.248217   11017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:24.248923   11017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:24.250447   11017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:46:24.254278   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:46:24.254290   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:24.281646   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:46:24.281673   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:24.354939   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:46:24.354974   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:24.383116   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:46:24.383140   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:46:24.462892   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:46:24.462927   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:46:24.504197   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:46:24.504228   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:24.562928   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:46:24.562961   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:24.599399   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:46:24.599433   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:24.631679   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:46:24.631746   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:46:24.732308   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:46:24.732344   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:46:27.244674   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:46:27.255895   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:46:27.256012   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:46:27.287040   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:27.287060   51251 cri.go:89] found id: ""
	I1018 17:46:27.287069   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:46:27.287149   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:27.290894   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:46:27.290963   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:46:27.320255   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:27.320275   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:27.320280   51251 cri.go:89] found id: ""
	I1018 17:46:27.320287   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:46:27.320342   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:27.323980   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:27.327547   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:46:27.327617   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:46:27.352735   51251 cri.go:89] found id: ""
	I1018 17:46:27.352759   51251 logs.go:282] 0 containers: []
	W1018 17:46:27.352768   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:46:27.352774   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:46:27.352857   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:46:27.379505   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:27.379527   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:27.379532   51251 cri.go:89] found id: ""
	I1018 17:46:27.379539   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:46:27.379595   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:27.383294   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:27.386911   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:46:27.386986   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:46:27.415912   51251 cri.go:89] found id: ""
	I1018 17:46:27.415934   51251 logs.go:282] 0 containers: []
	W1018 17:46:27.415943   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:46:27.415949   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:46:27.416005   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:46:27.445650   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:27.445672   51251 cri.go:89] found id: ""
	I1018 17:46:27.445682   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:46:27.445741   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:27.449604   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:46:27.449704   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:46:27.484794   51251 cri.go:89] found id: ""
	I1018 17:46:27.484859   51251 logs.go:282] 0 containers: []
	W1018 17:46:27.484882   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:46:27.484904   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:46:27.484958   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:46:27.584293   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:46:27.584332   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:27.648407   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:46:27.648440   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:27.676738   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:46:27.676766   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:46:27.689349   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:46:27.689383   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:46:27.762040   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:46:27.753582   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:27.754358   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:27.756209   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:27.756792   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:27.758400   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:46:27.753582   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:27.754358   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:27.756209   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:27.756792   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:27.758400   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:46:27.762060   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:46:27.762074   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:27.788162   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:46:27.788190   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:27.822151   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:46:27.822180   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:27.891958   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:46:27.891993   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:27.920389   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:46:27.920413   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:46:28.000828   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:46:28.000902   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
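
Each pass also reruns the "describe nodes" step, and every rerun in this stretch fails the same way: kubectl cannot reach an apiserver on localhost:8443, so the command exits with status 1 and the probe loop keeps gathering logs and retrying. The failing invocation, verbatim from the log:

	sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	# exits 1 while nothing answers on localhost:8443:
	#   "The connection to the server localhost:8443 was refused - did you specify the right host or port?"
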
	I1018 17:46:30.539090   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:46:30.549624   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:46:30.549693   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:46:30.576191   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:30.576210   51251 cri.go:89] found id: ""
	I1018 17:46:30.576218   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:46:30.576270   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:30.580032   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:46:30.580143   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:46:30.605554   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:30.605576   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:30.605582   51251 cri.go:89] found id: ""
	I1018 17:46:30.605600   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:46:30.605693   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:30.609432   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:30.613226   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:46:30.613297   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:46:30.640206   51251 cri.go:89] found id: ""
	I1018 17:46:30.640232   51251 logs.go:282] 0 containers: []
	W1018 17:46:30.640241   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:46:30.640248   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:46:30.640305   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:46:30.667995   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:30.668022   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:30.668027   51251 cri.go:89] found id: ""
	I1018 17:46:30.668035   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:46:30.668090   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:30.671800   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:30.675538   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:46:30.675607   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:46:30.700530   51251 cri.go:89] found id: ""
	I1018 17:46:30.700554   51251 logs.go:282] 0 containers: []
	W1018 17:46:30.700562   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:46:30.700568   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:46:30.700623   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:46:30.728589   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:30.728610   51251 cri.go:89] found id: ""
	I1018 17:46:30.728618   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:46:30.728673   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:30.732322   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:46:30.732414   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:46:30.757553   51251 cri.go:89] found id: ""
	I1018 17:46:30.757577   51251 logs.go:282] 0 containers: []
	W1018 17:46:30.757586   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:46:30.757594   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:46:30.757635   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:46:30.823888   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:46:30.816309   11289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:30.816862   11289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:30.818339   11289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:30.818806   11289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:30.820240   11289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:46:30.816309   11289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:30.816862   11289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:30.818339   11289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:30.818806   11289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:30.820240   11289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:46:30.823908   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:46:30.823921   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:30.849213   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:46:30.849239   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:30.906353   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:46:30.906387   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:30.995137   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:46:30.995173   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:46:31.081727   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:46:31.081761   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:46:31.125969   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:46:31.125994   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:46:31.232441   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:46:31.232474   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:46:31.244403   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:46:31.244430   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:31.288661   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:46:31.288704   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:31.322411   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:46:31.322439   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:33.853119   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:46:33.864167   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:46:33.864236   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:46:33.897397   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:33.897420   51251 cri.go:89] found id: ""
	I1018 17:46:33.897428   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:46:33.897485   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:33.901240   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:46:33.901310   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:46:33.929613   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:33.929646   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:33.929651   51251 cri.go:89] found id: ""
	I1018 17:46:33.929658   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:46:33.929735   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:33.933312   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:33.936856   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:46:33.936964   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:46:33.977530   51251 cri.go:89] found id: ""
	I1018 17:46:33.977558   51251 logs.go:282] 0 containers: []
	W1018 17:46:33.977566   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:46:33.977573   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:46:33.977631   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:46:34.012562   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:34.012584   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:34.012589   51251 cri.go:89] found id: ""
	I1018 17:46:34.012596   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:46:34.012656   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:34.016474   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:34.020781   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:46:34.020852   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:46:34.046987   51251 cri.go:89] found id: ""
	I1018 17:46:34.047014   51251 logs.go:282] 0 containers: []
	W1018 17:46:34.047022   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:46:34.047029   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:46:34.047086   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:46:34.076543   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:34.076564   51251 cri.go:89] found id: ""
	I1018 17:46:34.076575   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:46:34.076631   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:34.080378   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:46:34.080449   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:46:34.107694   51251 cri.go:89] found id: ""
	I1018 17:46:34.107716   51251 logs.go:282] 0 containers: []
	W1018 17:46:34.107724   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:46:34.107734   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:46:34.107745   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:46:34.119659   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:46:34.119686   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:34.177728   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:46:34.177831   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:34.238468   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:46:34.238509   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:34.321582   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:46:34.321620   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:34.353750   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:46:34.353776   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:34.384525   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:46:34.384552   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:46:34.462817   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:46:34.462849   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:46:34.494982   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:46:34.495010   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:46:34.598168   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:46:34.598203   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:46:34.675787   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:46:34.666968   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:34.667733   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:34.669584   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:34.670213   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:34.671781   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:46:34.666968   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:34.667733   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:34.669584   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:34.670213   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:34.671781   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:46:34.675809   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:46:34.675822   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:37.204073   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:46:37.217257   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:46:37.217324   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:46:37.242870   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:37.242892   51251 cri.go:89] found id: ""
	I1018 17:46:37.242900   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:46:37.242956   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:37.246583   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:46:37.246652   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:46:37.272095   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:37.272157   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:37.272174   51251 cri.go:89] found id: ""
	I1018 17:46:37.272195   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:46:37.272279   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:37.276536   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:37.280121   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:46:37.280190   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:46:37.305151   51251 cri.go:89] found id: ""
	I1018 17:46:37.305173   51251 logs.go:282] 0 containers: []
	W1018 17:46:37.305182   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:46:37.305188   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:46:37.305244   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:46:37.338068   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:37.338137   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:37.338155   51251 cri.go:89] found id: ""
	I1018 17:46:37.338191   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:46:37.338263   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:37.342725   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:37.346547   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:46:37.346621   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:46:37.374074   51251 cri.go:89] found id: ""
	I1018 17:46:37.374095   51251 logs.go:282] 0 containers: []
	W1018 17:46:37.374104   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:46:37.374110   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:46:37.374167   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:46:37.405324   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:37.405346   51251 cri.go:89] found id: ""
	I1018 17:46:37.405360   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:46:37.405434   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:37.409814   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:46:37.409899   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:46:37.435527   51251 cri.go:89] found id: ""
	I1018 17:46:37.435551   51251 logs.go:282] 0 containers: []
	W1018 17:46:37.435560   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:46:37.435568   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:46:37.435579   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:46:37.504448   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:46:37.496518   11559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:37.497134   11559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:37.498616   11559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:37.499058   11559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:37.500376   11559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:46:37.496518   11559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:37.497134   11559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:37.498616   11559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:37.499058   11559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:37.500376   11559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:46:37.504468   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:46:37.504482   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:37.533375   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:46:37.533403   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:37.598625   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:46:37.598661   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:37.634535   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:46:37.634563   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:46:37.717277   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:46:37.717311   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:46:37.818978   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:46:37.819016   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:46:37.832055   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:46:37.832084   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:37.904377   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:46:37.904408   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:37.938939   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:46:37.938966   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:37.981000   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:46:37.981027   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:46:40.513454   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:46:40.524358   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:46:40.524437   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:46:40.552377   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:40.552454   51251 cri.go:89] found id: ""
	I1018 17:46:40.552475   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:46:40.552563   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:40.556445   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:46:40.556565   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:46:40.582695   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:40.582726   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:40.582732   51251 cri.go:89] found id: ""
	I1018 17:46:40.582739   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:46:40.582814   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:40.586779   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:40.590379   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:46:40.590449   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:46:40.618010   51251 cri.go:89] found id: ""
	I1018 17:46:40.618034   51251 logs.go:282] 0 containers: []
	W1018 17:46:40.618050   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:46:40.618056   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:46:40.618113   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:46:40.648753   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:40.648776   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:40.648782   51251 cri.go:89] found id: ""
	I1018 17:46:40.648790   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:46:40.648848   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:40.652681   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:40.656399   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:46:40.656475   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:46:40.682133   51251 cri.go:89] found id: ""
	I1018 17:46:40.682157   51251 logs.go:282] 0 containers: []
	W1018 17:46:40.682165   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:46:40.682180   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:46:40.682236   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:46:40.709218   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:40.709242   51251 cri.go:89] found id: ""
	I1018 17:46:40.709250   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:46:40.709309   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:40.713679   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:46:40.713762   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:46:40.739858   51251 cri.go:89] found id: ""
	I1018 17:46:40.739881   51251 logs.go:282] 0 containers: []
	W1018 17:46:40.739889   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:46:40.739899   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:46:40.739910   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:40.767013   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:46:40.767039   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:46:40.815169   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:46:40.815198   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:46:40.828097   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:46:40.828174   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:40.854852   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:46:40.854880   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:40.928587   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:46:40.928623   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:40.967185   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:46:40.967264   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:41.043445   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:46:41.043480   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:41.073682   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:46:41.073706   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:46:41.167926   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:46:41.167960   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:46:41.279975   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:46:41.280011   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:46:41.354826   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:46:41.337935   11766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:41.339488   11766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:41.340251   11766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:41.347202   11766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:41.347805   11766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:46:41.337935   11766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:41.339488   11766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:41.340251   11766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:41.347202   11766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:41.347805   11766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:46:43.856192   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:46:43.867961   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:46:43.868072   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:46:43.894221   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:43.894243   51251 cri.go:89] found id: ""
	I1018 17:46:43.894252   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:46:43.894332   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:43.898170   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:46:43.898263   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:46:43.925956   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:43.926031   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:43.926050   51251 cri.go:89] found id: ""
	I1018 17:46:43.926070   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:46:43.926142   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:43.929746   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:43.933185   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:46:43.933255   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:46:43.959602   51251 cri.go:89] found id: ""
	I1018 17:46:43.959627   51251 logs.go:282] 0 containers: []
	W1018 17:46:43.959635   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:46:43.959647   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:46:43.959704   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:46:43.991256   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:43.991325   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:43.991354   51251 cri.go:89] found id: ""
	I1018 17:46:43.991375   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:46:43.991457   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:43.995372   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:43.999083   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:46:43.999191   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:46:44.027597   51251 cri.go:89] found id: ""
	I1018 17:46:44.027632   51251 logs.go:282] 0 containers: []
	W1018 17:46:44.027641   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:46:44.027647   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:46:44.027715   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:46:44.055061   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:44.055085   51251 cri.go:89] found id: ""
	I1018 17:46:44.055094   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:46:44.055163   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:44.059234   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:46:44.059339   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:46:44.087631   51251 cri.go:89] found id: ""
	I1018 17:46:44.087653   51251 logs.go:282] 0 containers: []
	W1018 17:46:44.087661   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:46:44.087670   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:46:44.087681   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:46:44.189442   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:46:44.189477   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:44.218935   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:46:44.218961   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:44.286708   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:46:44.286746   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:44.321434   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:46:44.321463   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:46:44.399455   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:46:44.399492   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:46:44.434475   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:46:44.434502   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:46:44.448230   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:46:44.448256   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:46:44.523028   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:46:44.515201   11882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:44.515969   11882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:44.517455   11882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:44.517964   11882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:44.519503   11882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:46:44.515201   11882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:44.515969   11882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:44.517455   11882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:44.517964   11882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:44.519503   11882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:46:44.523047   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:46:44.523060   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:44.559772   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:46:44.559799   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:44.632864   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:46:44.632968   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:47.163147   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:46:47.174684   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:46:47.174753   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:46:47.212548   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:47.212575   51251 cri.go:89] found id: ""
	I1018 17:46:47.212583   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:46:47.212638   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:47.216970   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:46:47.217043   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:46:47.246472   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:47.246547   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:47.246565   51251 cri.go:89] found id: ""
	I1018 17:46:47.246585   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:46:47.246669   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:47.252448   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:47.255988   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:46:47.256113   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:46:47.287109   51251 cri.go:89] found id: ""
	I1018 17:46:47.287134   51251 logs.go:282] 0 containers: []
	W1018 17:46:47.287144   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:46:47.287150   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:46:47.287211   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:46:47.316914   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:47.316964   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:47.316969   51251 cri.go:89] found id: ""
	I1018 17:46:47.316977   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:46:47.317032   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:47.320849   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:47.324385   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:46:47.324455   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:46:47.351869   51251 cri.go:89] found id: ""
	I1018 17:46:47.351894   51251 logs.go:282] 0 containers: []
	W1018 17:46:47.351902   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:46:47.351908   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:46:47.351963   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:46:47.378692   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:47.378712   51251 cri.go:89] found id: ""
	I1018 17:46:47.378720   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:46:47.378773   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:47.382267   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:46:47.382341   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:46:47.417848   51251 cri.go:89] found id: ""
	I1018 17:46:47.417914   51251 logs.go:282] 0 containers: []
	W1018 17:46:47.417928   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:46:47.417938   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:46:47.417953   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:46:47.515489   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:46:47.515527   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:46:47.598137   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:46:47.585088   11978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:47.586210   11978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:47.586811   11978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:47.592142   11978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:47.592951   11978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:46:47.585088   11978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:47.586210   11978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:47.586811   11978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:47.592142   11978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:47.592951   11978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:46:47.598159   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:46:47.598172   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:47.627147   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:46:47.627171   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:47.685715   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:46:47.685749   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:47.729509   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:46:47.729542   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:47.802620   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:46:47.802658   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:47.841366   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:46:47.841393   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:46:47.853500   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:46:47.853528   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:47.882085   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:46:47.882112   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:46:47.962102   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:46:47.962182   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:46:50.497378   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:46:50.509438   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:46:50.509515   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:46:50.536827   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:50.536845   51251 cri.go:89] found id: ""
	I1018 17:46:50.536853   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:46:50.536906   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:50.540656   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:46:50.540736   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:46:50.572295   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:50.572315   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:50.572319   51251 cri.go:89] found id: ""
	I1018 17:46:50.572326   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:46:50.572381   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:50.576114   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:50.579678   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:46:50.579767   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:46:50.604801   51251 cri.go:89] found id: ""
	I1018 17:46:50.604883   51251 logs.go:282] 0 containers: []
	W1018 17:46:50.604907   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:46:50.604953   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:46:50.605039   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:46:50.630628   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:50.630689   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:50.630709   51251 cri.go:89] found id: ""
	I1018 17:46:50.630731   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:46:50.630799   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:50.634652   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:50.638142   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:46:50.638211   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:46:50.668081   51251 cri.go:89] found id: ""
	I1018 17:46:50.668158   51251 logs.go:282] 0 containers: []
	W1018 17:46:50.668178   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:46:50.668199   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:46:50.668286   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:46:50.695569   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:50.695633   51251 cri.go:89] found id: ""
	I1018 17:46:50.695655   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:46:50.695739   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:50.699470   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:46:50.699542   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:46:50.727412   51251 cri.go:89] found id: ""
	I1018 17:46:50.727436   51251 logs.go:282] 0 containers: []
	W1018 17:46:50.727445   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:46:50.727454   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:46:50.727467   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:50.753408   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:46:50.753435   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:50.827768   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:46:50.827848   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:50.859978   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:46:50.860003   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:46:50.939527   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:46:50.939561   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:46:50.980682   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:46:50.980711   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:46:51.076628   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:46:51.076663   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:46:51.090191   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:46:51.090220   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:46:51.182260   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:46:51.173917   12158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:51.174843   12158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:51.176369   12158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:51.176776   12158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:51.178414   12158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:46:51.173917   12158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:51.174843   12158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:51.176369   12158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:51.176776   12158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:51.178414   12158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:46:51.182283   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:46:51.182295   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:51.232720   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:46:51.232749   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:51.308144   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:46:51.308178   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:53.837977   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:46:53.848545   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:46:53.848614   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:46:53.876495   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:53.876519   51251 cri.go:89] found id: ""
	I1018 17:46:53.876528   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:46:53.876595   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:53.880322   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:46:53.880394   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:46:53.907168   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:53.907231   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:53.907249   51251 cri.go:89] found id: ""
	I1018 17:46:53.907272   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:46:53.907357   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:53.911597   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:53.914987   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:46:53.915059   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:46:53.940518   51251 cri.go:89] found id: ""
	I1018 17:46:53.940542   51251 logs.go:282] 0 containers: []
	W1018 17:46:53.940551   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:46:53.940557   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:46:53.940616   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:46:53.978433   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:53.978457   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:53.978462   51251 cri.go:89] found id: ""
	I1018 17:46:53.978469   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:46:53.978524   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:53.982381   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:53.985948   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:46:53.986022   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:46:54.015365   51251 cri.go:89] found id: ""
	I1018 17:46:54.015389   51251 logs.go:282] 0 containers: []
	W1018 17:46:54.015403   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:46:54.015410   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:46:54.015469   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:46:54.043566   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:54.043585   51251 cri.go:89] found id: ""
	I1018 17:46:54.043594   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:46:54.043652   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:54.047469   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:46:54.047537   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:46:54.074756   51251 cri.go:89] found id: ""
	I1018 17:46:54.074779   51251 logs.go:282] 0 containers: []
	W1018 17:46:54.074788   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:46:54.074797   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:46:54.074836   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:54.105299   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:46:54.105329   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:54.181466   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:46:54.181501   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:46:54.274419   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:46:54.274455   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:46:54.312879   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:46:54.312907   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:46:54.417669   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:46:54.417744   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:46:54.429755   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:46:54.429780   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:46:54.498834   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:46:54.489425   12285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:54.491045   12285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:54.492004   12285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:54.493115   12285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:54.494863   12285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:46:54.489425   12285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:54.491045   12285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:54.492004   12285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:54.493115   12285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:54.494863   12285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:46:54.498906   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:46:54.498927   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:54.527210   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:46:54.527238   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:54.569700   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:46:54.569732   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:54.644529   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:46:54.644561   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:57.172362   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:46:57.183486   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:46:57.183556   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:46:57.221818   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:57.221836   51251 cri.go:89] found id: ""
	I1018 17:46:57.221844   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:46:57.221899   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:57.225454   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:46:57.225520   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:46:57.252169   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:57.252192   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:57.252197   51251 cri.go:89] found id: ""
	I1018 17:46:57.252206   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:46:57.252263   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:57.256351   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:57.259722   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:46:57.259804   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:46:57.286504   51251 cri.go:89] found id: ""
	I1018 17:46:57.286527   51251 logs.go:282] 0 containers: []
	W1018 17:46:57.286536   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:46:57.286542   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:46:57.286603   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:46:57.314232   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:57.314254   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:57.314259   51251 cri.go:89] found id: ""
	I1018 17:46:57.314267   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:46:57.314322   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:57.317847   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:57.320999   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:46:57.321074   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:46:57.346974   51251 cri.go:89] found id: ""
	I1018 17:46:57.346999   51251 logs.go:282] 0 containers: []
	W1018 17:46:57.347008   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:46:57.347014   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:46:57.347069   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:46:57.373499   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:57.373567   51251 cri.go:89] found id: ""
	I1018 17:46:57.373587   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:46:57.373664   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:57.377584   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:46:57.377703   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:46:57.407749   51251 cri.go:89] found id: ""
	I1018 17:46:57.407773   51251 logs.go:282] 0 containers: []
	W1018 17:46:57.407782   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:46:57.407790   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:46:57.407801   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:46:57.420407   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:46:57.420432   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:57.450356   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:46:57.450384   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:57.487363   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:46:57.487394   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:57.580373   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:46:57.580410   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:46:57.617494   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:46:57.617524   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:46:57.719190   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:46:57.719227   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:46:57.790068   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:46:57.780054   12431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:57.780444   12431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:57.782856   12431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:57.783240   12431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:57.785433   12431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:46:57.780054   12431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:57.780444   12431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:57.782856   12431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:57.783240   12431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:57.785433   12431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:46:57.790090   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:46:57.790104   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:57.849803   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:46:57.849835   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:57.881569   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:46:57.881600   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:57.911940   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:46:57.911966   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
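	The per-container sections of each cycle are produced with `sudo /usr/local/bin/crictl logs --tail 400 <id>`. A small Go sketch of that call, using the kube-apiserver container ID seen above as an example (illustrative only; the crictl path and sudo usage are copied from the log):

```go
package main

import (
	"fmt"
	"os/exec"
)

// tailContainerLogs returns the last n log lines of a CRI container, the same
// way the log bundle above is assembled.
func tailContainerLogs(id string, n int) (string, error) {
	out, err := exec.Command("sudo", "/usr/local/bin/crictl", "logs", "--tail", fmt.Sprint(n), id).CombinedOutput()
	if err != nil {
		return "", fmt.Errorf("crictl logs %s: %w", id, err)
	}
	return string(out), nil
}

func main() {
	// Container ID taken from the kube-apiserver entries above; substitute any
	// ID reported by "crictl ps -a --quiet".
	logs, err := tailContainerLogs("707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4", 400)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Print(logs)
}
```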
	I1018 17:47:00.495334   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:47:00.507616   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:47:00.507694   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:47:00.539238   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:00.539258   51251 cri.go:89] found id: ""
	I1018 17:47:00.539266   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:47:00.539323   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:00.543503   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:47:00.543571   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:47:00.574079   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:00.574112   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:00.574118   51251 cri.go:89] found id: ""
	I1018 17:47:00.574126   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:47:00.574199   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:00.578461   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:00.582394   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:47:00.582473   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:47:00.609898   51251 cri.go:89] found id: ""
	I1018 17:47:00.609973   51251 logs.go:282] 0 containers: []
	W1018 17:47:00.610004   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:47:00.610017   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:47:00.610086   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:47:00.637367   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:00.637388   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:00.637393   51251 cri.go:89] found id: ""
	I1018 17:47:00.637400   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:47:00.637464   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:00.641319   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:00.644789   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:47:00.644895   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:47:00.672435   51251 cri.go:89] found id: ""
	I1018 17:47:00.672467   51251 logs.go:282] 0 containers: []
	W1018 17:47:00.672476   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:47:00.672498   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:47:00.672580   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:47:00.699455   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:00.699483   51251 cri.go:89] found id: ""
	I1018 17:47:00.699492   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:47:00.699583   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:00.703264   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:47:00.703360   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:47:00.728880   51251 cri.go:89] found id: ""
	I1018 17:47:00.728902   51251 logs.go:282] 0 containers: []
	W1018 17:47:00.728909   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:47:00.728919   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:47:00.728930   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:47:00.823491   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:47:00.823527   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:47:00.902015   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:47:00.902048   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:47:00.934461   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:47:00.934491   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:47:00.946667   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:47:00.946693   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:47:01.028399   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:47:01.020279   12546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:01.020921   12546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:01.022494   12546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:01.023037   12546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:01.024610   12546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:47:01.020279   12546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:01.020921   12546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:01.022494   12546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:01.023037   12546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:01.024610   12546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:47:01.028462   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:47:01.028491   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:01.054806   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:47:01.054833   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:01.113787   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:47:01.113863   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:01.158354   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:47:01.158386   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:01.240342   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:47:01.240377   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:01.271277   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:47:01.271308   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:03.801529   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:47:03.812492   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:47:03.812565   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:47:03.840023   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:03.840046   51251 cri.go:89] found id: ""
	I1018 17:47:03.840054   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:47:03.840107   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:03.844123   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:47:03.844199   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:47:03.871286   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:03.871312   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:03.871317   51251 cri.go:89] found id: ""
	I1018 17:47:03.871325   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:47:03.871393   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:03.875415   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:03.879340   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:47:03.879454   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:47:03.907561   51251 cri.go:89] found id: ""
	I1018 17:47:03.907586   51251 logs.go:282] 0 containers: []
	W1018 17:47:03.907595   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:47:03.907602   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:47:03.907685   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:47:03.933344   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:03.933418   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:03.933445   51251 cri.go:89] found id: ""
	I1018 17:47:03.933467   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:47:03.933532   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:03.937202   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:03.940624   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:47:03.940692   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:47:03.976333   51251 cri.go:89] found id: ""
	I1018 17:47:03.976360   51251 logs.go:282] 0 containers: []
	W1018 17:47:03.976369   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:47:03.976375   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:47:03.976431   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:47:04.003969   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:04.003993   51251 cri.go:89] found id: ""
	I1018 17:47:04.004002   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:47:04.004073   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:04.008851   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:47:04.008931   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:47:04.043815   51251 cri.go:89] found id: ""
	I1018 17:47:04.043837   51251 logs.go:282] 0 containers: []
	W1018 17:47:04.043845   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:47:04.043854   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:47:04.043866   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:04.103935   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:47:04.103972   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:04.197102   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:47:04.197140   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:04.232873   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:47:04.232903   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:47:04.308823   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:47:04.308859   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:47:04.340563   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:47:04.340591   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:47:04.411725   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:47:04.402979   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:04.403733   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:04.405382   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:04.405957   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:04.407619   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:47:04.402979   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:04.403733   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:04.405382   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:04.405957   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:04.407619   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:47:04.411746   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:47:04.411758   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:04.436986   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:47:04.437017   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:04.474563   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:47:04.474599   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:04.508182   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:47:04.508207   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:47:04.612203   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:47:04.612245   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
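	Each cycle opens with `sudo pgrep -xnf kube-apiserver.*minikube.*`, and the cycles repeat roughly every three seconds while the apiserver stays down. A minimal Go sketch of such a wait loop, with an arbitrary illustrative timeout (a sketch of the pattern visible in the timestamps, not minikube's actual retry logic):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning mirrors the "sudo pgrep -xnf kube-apiserver.*minikube.*"
// probe: it reports whether a matching kube-apiserver process exists.
func apiserverRunning() bool {
	err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
	return err == nil // pgrep exits 0 only when at least one process matches
}

func main() {
	// Poll about every three seconds, matching the cadence of the cycles above.
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("kube-apiserver process found")
			return
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}
```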
	I1018 17:47:07.124391   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:47:07.136931   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:47:07.137030   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:47:07.162931   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:07.162951   51251 cri.go:89] found id: ""
	I1018 17:47:07.162960   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:47:07.163014   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:07.166802   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:47:07.166873   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:47:07.194647   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:07.194666   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:07.194671   51251 cri.go:89] found id: ""
	I1018 17:47:07.194679   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:47:07.194732   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:07.198306   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:07.202321   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:47:07.202393   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:47:07.236779   51251 cri.go:89] found id: ""
	I1018 17:47:07.236804   51251 logs.go:282] 0 containers: []
	W1018 17:47:07.236813   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:47:07.236819   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:47:07.236876   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:47:07.266781   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:07.266801   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:07.266806   51251 cri.go:89] found id: ""
	I1018 17:47:07.266813   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:47:07.266867   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:07.270559   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:07.275186   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:47:07.275286   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:47:07.304386   51251 cri.go:89] found id: ""
	I1018 17:47:07.304423   51251 logs.go:282] 0 containers: []
	W1018 17:47:07.304454   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:47:07.304462   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:47:07.304540   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:47:07.333196   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:07.333220   51251 cri.go:89] found id: ""
	I1018 17:47:07.333228   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:47:07.333322   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:07.338348   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:47:07.338462   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:47:07.366271   51251 cri.go:89] found id: ""
	I1018 17:47:07.366343   51251 logs.go:282] 0 containers: []
	W1018 17:47:07.366364   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:47:07.366379   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:47:07.366391   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:47:07.468507   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:47:07.468585   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:07.529687   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:47:07.529725   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:07.565649   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:47:07.565779   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:07.596211   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:47:07.596237   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:47:07.615230   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:47:07.615299   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:47:07.692829   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:47:07.685395   12829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:07.685775   12829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:07.687235   12829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:07.687549   12829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:07.689030   12829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:47:07.685395   12829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:07.685775   12829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:07.687235   12829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:07.687549   12829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:07.689030   12829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:47:07.692899   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:47:07.692930   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:07.718952   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:47:07.719025   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:07.795561   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:47:07.795598   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:07.824250   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:47:07.824280   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:47:07.906836   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:47:07.906868   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:47:10.439981   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:47:10.451479   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:47:10.451545   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:47:10.480101   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:10.480123   51251 cri.go:89] found id: ""
	I1018 17:47:10.480132   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:47:10.480190   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:10.483904   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:47:10.484019   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:47:10.514873   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:10.514897   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:10.514902   51251 cri.go:89] found id: ""
	I1018 17:47:10.514910   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:47:10.514966   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:10.518574   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:10.522267   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:47:10.522379   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:47:10.550236   51251 cri.go:89] found id: ""
	I1018 17:47:10.550300   51251 logs.go:282] 0 containers: []
	W1018 17:47:10.550324   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:47:10.550343   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:47:10.550419   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:47:10.576542   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:10.576564   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:10.576569   51251 cri.go:89] found id: ""
	I1018 17:47:10.576576   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:47:10.576631   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:10.580343   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:10.583810   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:47:10.583876   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:47:10.608923   51251 cri.go:89] found id: ""
	I1018 17:47:10.608997   51251 logs.go:282] 0 containers: []
	W1018 17:47:10.609009   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:47:10.609016   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:47:10.609083   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:47:10.640901   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:10.640997   51251 cri.go:89] found id: ""
	I1018 17:47:10.641019   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:47:10.641104   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:10.644777   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:47:10.644898   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:47:10.686801   51251 cri.go:89] found id: ""
	I1018 17:47:10.686867   51251 logs.go:282] 0 containers: []
	W1018 17:47:10.686888   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:47:10.686902   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:47:10.686913   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:47:10.790476   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:47:10.790513   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:10.866774   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:47:10.866808   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:10.896066   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:47:10.896092   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:47:10.977137   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:47:10.977170   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:47:11.028633   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:47:11.028664   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:47:11.040841   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:47:11.040870   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:47:11.108732   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:47:11.100472   12971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:11.101171   12971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:11.102909   12971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:11.103502   12971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:11.105204   12971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:47:11.100472   12971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:11.101171   12971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:11.102909   12971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:11.103502   12971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:11.105204   12971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:47:11.108754   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:47:11.108767   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:11.142956   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:47:11.142982   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:11.203085   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:47:11.203120   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:11.245548   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:47:11.245582   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:13.780727   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:47:13.792098   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:47:13.792166   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:47:13.819543   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:13.819564   51251 cri.go:89] found id: ""
	I1018 17:47:13.819571   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:47:13.819627   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:13.823882   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:47:13.823951   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:47:13.849465   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:13.849495   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:13.849501   51251 cri.go:89] found id: ""
	I1018 17:47:13.849508   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:47:13.849563   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:13.853400   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:13.856833   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:47:13.856907   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:47:13.886459   51251 cri.go:89] found id: ""
	I1018 17:47:13.886482   51251 logs.go:282] 0 containers: []
	W1018 17:47:13.886502   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:47:13.886509   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:47:13.886576   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:47:13.914771   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:13.914840   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:13.914859   51251 cri.go:89] found id: ""
	I1018 17:47:13.914884   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:47:13.914961   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:13.919618   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:13.923284   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:47:13.923358   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:47:13.970811   51251 cri.go:89] found id: ""
	I1018 17:47:13.970833   51251 logs.go:282] 0 containers: []
	W1018 17:47:13.970841   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:47:13.970848   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:47:13.970905   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:47:13.997307   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:13.997333   51251 cri.go:89] found id: ""
	I1018 17:47:13.997341   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:47:13.997406   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:14.001258   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:47:14.001421   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:47:14.031834   51251 cri.go:89] found id: ""
	I1018 17:47:14.031908   51251 logs.go:282] 0 containers: []
	W1018 17:47:14.031930   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:47:14.031952   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:47:14.031991   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:14.115427   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:47:14.115472   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:14.155640   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:47:14.155675   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:47:14.260678   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:47:14.260712   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:14.299224   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:47:14.299256   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:14.328160   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:47:14.328189   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:47:14.402362   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:47:14.402396   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:47:14.436253   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:47:14.436279   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:47:14.448030   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:47:14.448054   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:47:14.523971   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:47:14.516092   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:14.516475   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:14.517978   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:14.518298   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:14.519757   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:47:14.516092   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:14.516475   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:14.517978   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:14.518298   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:14.519757   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:47:14.523992   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:47:14.524003   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:14.553496   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:47:14.553520   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:17.135556   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:47:17.147008   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:47:17.147074   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:47:17.173389   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:17.173409   51251 cri.go:89] found id: ""
	I1018 17:47:17.173417   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:47:17.173471   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:17.177579   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:47:17.177651   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:47:17.203627   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:17.203645   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:17.203650   51251 cri.go:89] found id: ""
	I1018 17:47:17.203657   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:47:17.203710   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:17.207344   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:17.217855   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:47:17.217930   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:47:17.249063   51251 cri.go:89] found id: ""
	I1018 17:47:17.249089   51251 logs.go:282] 0 containers: []
	W1018 17:47:17.249098   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:47:17.249105   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:47:17.249168   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:47:17.277163   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:17.277181   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:17.277186   51251 cri.go:89] found id: ""
	I1018 17:47:17.277193   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:47:17.277248   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:17.282612   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:17.286495   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:47:17.286569   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:47:17.319307   51251 cri.go:89] found id: ""
	I1018 17:47:17.319375   51251 logs.go:282] 0 containers: []
	W1018 17:47:17.319398   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:47:17.319410   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:47:17.319486   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:47:17.346484   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:17.346554   51251 cri.go:89] found id: ""
	I1018 17:47:17.346580   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:47:17.346657   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:17.350475   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:47:17.350550   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:47:17.377839   51251 cri.go:89] found id: ""
	I1018 17:47:17.377902   51251 logs.go:282] 0 containers: []
	W1018 17:47:17.377922   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:47:17.377931   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:47:17.377943   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:17.404392   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:47:17.404417   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:17.465336   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:47:17.465374   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:17.544540   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:47:17.544575   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:17.578410   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:47:17.578440   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:17.622849   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:47:17.622874   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:17.651286   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:47:17.651315   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:47:17.729896   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:47:17.729933   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:47:17.762097   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:47:17.762131   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:47:17.860291   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:47:17.860324   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:47:17.873306   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:47:17.873333   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:47:17.956831   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:47:17.948399   13279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:17.948817   13279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:17.950652   13279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:17.951205   13279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:17.953012   13279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:47:17.948399   13279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:17.948817   13279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:17.950652   13279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:17.951205   13279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:17.953012   13279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:47:20.457766   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:47:20.468306   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:47:20.468375   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:47:20.502498   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:20.502519   51251 cri.go:89] found id: ""
	I1018 17:47:20.502527   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:47:20.502581   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:20.506455   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:47:20.506526   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:47:20.533813   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:20.533831   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:20.533836   51251 cri.go:89] found id: ""
	I1018 17:47:20.533844   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:47:20.533897   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:20.537754   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:20.541481   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:47:20.541549   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:47:20.567040   51251 cri.go:89] found id: ""
	I1018 17:47:20.567063   51251 logs.go:282] 0 containers: []
	W1018 17:47:20.567071   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:47:20.567078   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:47:20.567139   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:47:20.596640   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:20.596661   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:20.596666   51251 cri.go:89] found id: ""
	I1018 17:47:20.596674   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:47:20.596729   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:20.600667   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:20.604504   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:47:20.604571   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:47:20.636801   51251 cri.go:89] found id: ""
	I1018 17:47:20.636826   51251 logs.go:282] 0 containers: []
	W1018 17:47:20.636835   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:47:20.636841   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:47:20.636919   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:47:20.663088   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:20.663107   51251 cri.go:89] found id: ""
	I1018 17:47:20.663120   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:47:20.663175   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:20.666758   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:47:20.666830   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:47:20.693183   51251 cri.go:89] found id: ""
	I1018 17:47:20.693205   51251 logs.go:282] 0 containers: []
	W1018 17:47:20.693214   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:47:20.693223   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:47:20.693233   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:47:20.759707   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:47:20.751450   13345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:20.752024   13345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:20.753590   13345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:20.754259   13345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:20.755733   13345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:47:20.751450   13345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:20.752024   13345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:20.753590   13345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:20.754259   13345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:20.755733   13345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:47:20.759728   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:47:20.759743   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:20.820356   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:47:20.820393   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:20.855109   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:47:20.855142   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:20.933430   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:47:20.933470   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:20.961931   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:47:20.961959   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:47:21.002517   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:47:21.002558   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:47:21.019433   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:47:21.019511   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:21.047420   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:47:21.047495   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:21.079819   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:47:21.079893   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:47:21.155722   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:47:21.155759   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:47:23.766139   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:47:23.777085   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:47:23.777151   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:47:23.811684   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:23.811707   51251 cri.go:89] found id: ""
	I1018 17:47:23.811715   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:47:23.811770   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:23.817453   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:47:23.817525   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:47:23.844121   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:23.844141   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:23.844146   51251 cri.go:89] found id: ""
	I1018 17:47:23.844153   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:47:23.844213   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:23.847866   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:23.851438   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:47:23.851510   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:47:23.879002   51251 cri.go:89] found id: ""
	I1018 17:47:23.879067   51251 logs.go:282] 0 containers: []
	W1018 17:47:23.879082   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:47:23.879089   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:47:23.879148   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:47:23.905700   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:23.905722   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:23.905727   51251 cri.go:89] found id: ""
	I1018 17:47:23.905735   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:47:23.905838   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:23.909628   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:23.913950   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:47:23.914019   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:47:23.946272   51251 cri.go:89] found id: ""
	I1018 17:47:23.946347   51251 logs.go:282] 0 containers: []
	W1018 17:47:23.946362   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:47:23.946370   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:47:23.946428   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:47:23.982078   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:23.982100   51251 cri.go:89] found id: ""
	I1018 17:47:23.982109   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:47:23.982162   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:23.985823   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:47:23.985895   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:47:24.020838   51251 cri.go:89] found id: ""
	I1018 17:47:24.020863   51251 logs.go:282] 0 containers: []
	W1018 17:47:24.020872   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:47:24.020881   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:47:24.020895   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:24.049680   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:47:24.049704   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:24.114947   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:47:24.114984   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:24.157780   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:47:24.157811   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:24.187365   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:47:24.187391   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:47:24.272125   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:47:24.264460   13521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:24.265126   13521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:24.266121   13521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:24.266734   13521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:24.268444   13521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:47:24.264460   13521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:24.265126   13521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:24.266121   13521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:24.266734   13521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:24.268444   13521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:47:24.272150   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:47:24.272162   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:24.351210   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:47:24.351246   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:24.379627   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:47:24.379654   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:47:24.459957   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:47:24.459991   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:47:24.490809   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:47:24.490834   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:47:24.594421   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:47:24.594457   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:47:27.106652   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:47:27.118797   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:47:27.118867   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:47:27.156694   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:27.156714   51251 cri.go:89] found id: ""
	I1018 17:47:27.156723   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:47:27.156776   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:27.160480   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:47:27.160550   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:47:27.187759   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:27.187780   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:27.187785   51251 cri.go:89] found id: ""
	I1018 17:47:27.187793   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:47:27.187855   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:27.191713   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:27.195093   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:47:27.195159   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:47:27.231641   51251 cri.go:89] found id: ""
	I1018 17:47:27.231663   51251 logs.go:282] 0 containers: []
	W1018 17:47:27.231671   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:47:27.231681   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:47:27.231737   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:47:27.259596   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:27.259614   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:27.259619   51251 cri.go:89] found id: ""
	I1018 17:47:27.259626   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:47:27.259678   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:27.263281   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:27.266728   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:47:27.266826   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:47:27.294104   51251 cri.go:89] found id: ""
	I1018 17:47:27.294127   51251 logs.go:282] 0 containers: []
	W1018 17:47:27.294139   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:47:27.294145   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:47:27.294205   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:47:27.321776   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:27.321798   51251 cri.go:89] found id: ""
	I1018 17:47:27.321806   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:47:27.321868   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:27.325558   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:47:27.325631   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:47:27.356639   51251 cri.go:89] found id: ""
	I1018 17:47:27.356666   51251 logs.go:282] 0 containers: []
	W1018 17:47:27.356674   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:47:27.356683   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:47:27.356694   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:47:27.462575   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:47:27.462610   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:47:27.529536   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:47:27.520733   13624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:27.521424   13624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:27.523093   13624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:27.523552   13624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:27.525157   13624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:47:27.520733   13624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:27.521424   13624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:27.523093   13624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:27.523552   13624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:27.525157   13624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:47:27.529559   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:47:27.529573   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:27.555154   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:47:27.555180   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:27.632084   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:47:27.632117   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:27.662590   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:47:27.662614   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:27.691692   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:47:27.691718   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:47:27.774358   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:47:27.774393   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:47:27.825515   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:47:27.825545   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:47:27.838343   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:47:27.838369   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:27.902992   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:47:27.903025   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:30.448737   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:47:30.460318   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:47:30.460398   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:47:30.488282   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:30.488306   51251 cri.go:89] found id: ""
	I1018 17:47:30.488314   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:47:30.488367   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:30.491908   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:47:30.491974   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:47:30.521041   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:30.521066   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:30.521071   51251 cri.go:89] found id: ""
	I1018 17:47:30.521079   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:47:30.521136   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:30.525103   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:30.528840   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:47:30.528916   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:47:30.562515   51251 cri.go:89] found id: ""
	I1018 17:47:30.562537   51251 logs.go:282] 0 containers: []
	W1018 17:47:30.562545   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:47:30.562551   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:47:30.562627   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:47:30.592562   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:30.592584   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:30.592589   51251 cri.go:89] found id: ""
	I1018 17:47:30.592596   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:47:30.592653   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:30.596706   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:30.600570   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:47:30.600692   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:47:30.627771   51251 cri.go:89] found id: ""
	I1018 17:47:30.627793   51251 logs.go:282] 0 containers: []
	W1018 17:47:30.627802   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:47:30.627808   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:47:30.627867   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:47:30.654477   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:30.654497   51251 cri.go:89] found id: ""
	I1018 17:47:30.654510   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:47:30.654565   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:30.658617   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:47:30.658686   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:47:30.689627   51251 cri.go:89] found id: ""
	I1018 17:47:30.689650   51251 logs.go:282] 0 containers: []
	W1018 17:47:30.689658   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:47:30.689667   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:47:30.689684   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:47:30.721050   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:47:30.721077   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:47:30.732370   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:47:30.732446   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:47:30.805446   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:47:30.796158   13776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:30.796640   13776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:30.798623   13776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:30.799026   13776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:30.800608   13776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:47:30.796158   13776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:30.796640   13776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:30.798623   13776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:30.799026   13776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:30.800608   13776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:47:30.805466   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:47:30.805478   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:30.830998   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:47:30.831024   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:30.906775   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:47:30.906811   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:30.940644   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:47:30.940671   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:47:31.026053   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:47:31.026089   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:47:31.137923   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:47:31.137966   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:31.233631   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:47:31.233668   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:31.264350   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:47:31.264374   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:33.793612   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:47:33.805648   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:47:33.805780   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:47:33.839954   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:33.840025   51251 cri.go:89] found id: ""
	I1018 17:47:33.840058   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:47:33.840138   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:33.844129   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:47:33.844243   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:47:33.871384   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:33.871408   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:33.871413   51251 cri.go:89] found id: ""
	I1018 17:47:33.871421   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:47:33.871476   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:33.875651   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:33.879420   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:47:33.879516   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:47:33.905649   51251 cri.go:89] found id: ""
	I1018 17:47:33.905676   51251 logs.go:282] 0 containers: []
	W1018 17:47:33.905684   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:47:33.905691   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:47:33.905749   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:47:33.934660   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:33.934683   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:33.934688   51251 cri.go:89] found id: ""
	I1018 17:47:33.934696   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:47:33.934780   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:33.938842   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:33.942670   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:47:33.942738   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:47:33.978544   51251 cri.go:89] found id: ""
	I1018 17:47:33.978568   51251 logs.go:282] 0 containers: []
	W1018 17:47:33.978576   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:47:33.978582   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:47:33.978643   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:47:34.012312   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:34.012389   51251 cri.go:89] found id: ""
	I1018 17:47:34.012468   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:47:34.012564   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:34.016868   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:47:34.017048   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:47:34.044577   51251 cri.go:89] found id: ""
	I1018 17:47:34.044648   51251 logs.go:282] 0 containers: []
	W1018 17:47:34.044668   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:47:34.044692   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:47:34.044729   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:34.072731   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:47:34.072799   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:47:34.103949   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:47:34.103978   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:47:34.117148   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:47:34.117176   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:47:34.197560   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:47:34.184268   13921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:34.184883   13921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:34.186363   13921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:34.186832   13921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:34.188578   13921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:47:34.184268   13921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:34.184883   13921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:34.186363   13921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:34.186832   13921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:34.188578   13921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:47:34.197584   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:47:34.197598   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:34.271679   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:47:34.271712   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:34.306656   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:47:34.306683   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:34.386272   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:47:34.386308   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:34.414077   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:47:34.414108   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:34.443807   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:47:34.443833   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:47:34.522683   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:47:34.522719   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:47:37.133400   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:47:37.147181   51251 out.go:203] 
	W1018 17:47:37.150020   51251 out.go:285] X Exiting due to K8S_APISERVER_MISSING: adding node: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1018 17:47:37.150063   51251 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1018 17:47:37.150073   51251 out.go:285] * Related issues:
	W1018 17:47:37.150088   51251 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1018 17:47:37.150102   51251 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1018 17:47:37.152991   51251 out.go:203] 
	
	
	==> CRI-O <==
	Oct 18 17:42:09 ha-181800 crio[664]: time="2025-10-18T17:42:09.20257717Z" level=info msg="Started container" PID=1382 containerID=20677c7e60d1996e5ef30701c2fa483c048319a013425dfed6187c287c0356bf description=kube-system/kindnet-72mvm/kindnet-cni id=83e6058c-c5b8-448d-b3d7-5186691986a4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9a75bfa4304b7995fa070b07859898cd617fcbbbf769fcdbda120cb3da5f1690
	Oct 18 17:42:09 ha-181800 crio[664]: time="2025-10-18T17:42:09.208099281Z" level=info msg="Started container" PID=1383 containerID=53b6059c5f00ad29bd734722047caa1917ada2ed5ac7284628e49ffa30dab92f description=kube-system/coredns-66bc5c9577-p7nbg/coredns id=943df95e-dbb8-484a-8f2a-243495bd2d36 name=/runtime.v1.RuntimeService/StartContainer sandboxID=399a3f557e994a4d64c7e77bfa57fcb97dec3f4f1b2ef3d5dcc06e92031fff33
	Oct 18 17:42:10 ha-181800 crio[664]: time="2025-10-18T17:42:10.111678023Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=1ee0e455-5885-424a-be70-f38c74ac9b88 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 17:42:10 ha-181800 crio[664]: time="2025-10-18T17:42:10.113151329Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=7332cd08-d810-418f-9239-f994866438d4 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 17:42:10 ha-181800 crio[664]: time="2025-10-18T17:42:10.115024796Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=d765eb2e-c860-4fae-a3f2-643ee4144808 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 17:42:10 ha-181800 crio[664]: time="2025-10-18T17:42:10.11532002Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 17:42:10 ha-181800 crio[664]: time="2025-10-18T17:42:10.119986301Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 17:42:10 ha-181800 crio[664]: time="2025-10-18T17:42:10.120167292Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/794fca1f203edd67ad13c746b10dd2dcd8837f7ca0cf411e1437cb8975c5cb1d/merged/etc/passwd: no such file or directory"
	Oct 18 17:42:10 ha-181800 crio[664]: time="2025-10-18T17:42:10.120189134Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/794fca1f203edd67ad13c746b10dd2dcd8837f7ca0cf411e1437cb8975c5cb1d/merged/etc/group: no such file or directory"
	Oct 18 17:42:10 ha-181800 crio[664]: time="2025-10-18T17:42:10.120431935Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 17:42:10 ha-181800 crio[664]: time="2025-10-18T17:42:10.145840056Z" level=info msg="Created container a443aed43e21dadb519c5e91013a1d8eb554ae8abd04f5107863e313e372bdc7: kube-system/storage-provisioner/storage-provisioner" id=d765eb2e-c860-4fae-a3f2-643ee4144808 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 17:42:10 ha-181800 crio[664]: time="2025-10-18T17:42:10.146767329Z" level=info msg="Starting container: a443aed43e21dadb519c5e91013a1d8eb554ae8abd04f5107863e313e372bdc7" id=7f29d364-0d5e-4652-9da1-74e15b27ef77 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 17:42:10 ha-181800 crio[664]: time="2025-10-18T17:42:10.148484142Z" level=info msg="Started container" PID=1447 containerID=a443aed43e21dadb519c5e91013a1d8eb554ae8abd04f5107863e313e372bdc7 description=kube-system/storage-provisioner/storage-provisioner id=7f29d364-0d5e-4652-9da1-74e15b27ef77 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c018680cc61b2fa252ffde6cc7588c2be7ef28b3a444122d3feed4e3f9e480f5
	Oct 18 17:42:19 ha-181800 crio[664]: time="2025-10-18T17:42:19.512333091Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 17:42:19 ha-181800 crio[664]: time="2025-10-18T17:42:19.516220368Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 17:42:19 ha-181800 crio[664]: time="2025-10-18T17:42:19.516254731Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 17:42:19 ha-181800 crio[664]: time="2025-10-18T17:42:19.516276286Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 17:42:19 ha-181800 crio[664]: time="2025-10-18T17:42:19.51949706Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 17:42:19 ha-181800 crio[664]: time="2025-10-18T17:42:19.51953286Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 17:42:19 ha-181800 crio[664]: time="2025-10-18T17:42:19.519558739Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 17:42:19 ha-181800 crio[664]: time="2025-10-18T17:42:19.523529282Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 17:42:19 ha-181800 crio[664]: time="2025-10-18T17:42:19.52356175Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 17:42:19 ha-181800 crio[664]: time="2025-10-18T17:42:19.523584117Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 17:42:19 ha-181800 crio[664]: time="2025-10-18T17:42:19.526772128Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 17:42:19 ha-181800 crio[664]: time="2025-10-18T17:42:19.526803677Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	a443aed43e21d       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   5 minutes ago       Running             storage-provisioner       1                   c018680cc61b2       storage-provisioner                 kube-system
	53b6059c5f00a       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   5 minutes ago       Running             coredns                   1                   399a3f557e994       coredns-66bc5c9577-p7nbg            kube-system
	20677c7e60d19       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   5 minutes ago       Running             kindnet-cni               1                   9a75bfa4304b7       kindnet-72mvm                       kube-system
	f24a57e28db5a       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   5 minutes ago       Running             busybox                   1                   5e71cad12b779       busybox-7b57f96db7-fbwpv            default
	2e4a1f13e1162       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   5 minutes ago       Running             kube-proxy                1                   7fecbfb4c17d9       kube-proxy-stgvm                    kube-system
	2c69476db7a72       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   5 minutes ago       Running             coredns                   1                   578310fdfac47       coredns-66bc5c9577-f6v2w            kube-system
	96f0fa2b71bea       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   6 minutes ago       Running             kube-controller-manager   4                   6555f89f5d7b8       kube-controller-manager-ha-181800   kube-system
	3c32a11f94c33       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   6 minutes ago       Running             kube-apiserver            4                   e20726c2a8ebb       kube-apiserver-ha-181800            kube-system
	1ffdfbb5e9622       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   6 minutes ago       Exited              kube-controller-manager   3                   6555f89f5d7b8       kube-controller-manager-ha-181800   kube-system
	933870b5e9434       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   6 minutes ago       Exited              kube-apiserver            3                   e20726c2a8ebb       kube-apiserver-ha-181800            kube-system
	dda012a63c45a       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   7 minutes ago       Running             etcd                      1                   41b759ba439df       etcd-ha-181800                      kube-system
	ac8ef32697a35       2a8917f902489be5a8dd414209c32b77bd644d187ea646d86dbdc31e85efb551   7 minutes ago       Running             kube-vip                  0                   a52c5b125e763       kube-vip-ha-181800                  kube-system
	6e9b6c2f0e69c       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   7 minutes ago       Running             kube-scheduler            1                   44df15c75598f       kube-scheduler-ha-181800            kube-system
	
	
	==> coredns [2c69476db7a72cef87d583347c986806259d1f8ec4d34537de08f030eed150f5] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54621 - 11724 "HINFO IN 6166212655013536567.4042456242834438062. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.026635361s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [53b6059c5f00ad29bd734722047caa1917ada2ed5ac7284628e49ffa30dab92f] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36574 - 3492 "HINFO IN 4503061436688671475.4348845373689282768. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.02623671s
	
	
	==> describe nodes <==
	Name:               ha-181800
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-181800
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=ha-181800
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T17_33_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 17:33:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-181800
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 17:47:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 17:46:57 +0000   Sat, 18 Oct 2025 17:33:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 17:46:57 +0000   Sat, 18 Oct 2025 17:33:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 17:46:57 +0000   Sat, 18 Oct 2025 17:33:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 17:46:57 +0000   Sat, 18 Oct 2025 17:34:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-181800
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                7dc9b150-98ed-4d4d-b680-5759a1e067a9
	  Boot ID:                    8a1d6305-2994-4aa4-a4cc-6b62966b9918
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-fbwpv             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-66bc5c9577-f6v2w             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     14m
	  kube-system                 coredns-66bc5c9577-p7nbg             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     14m
	  kube-system                 etcd-ha-181800                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         14m
	  kube-system                 kindnet-72mvm                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      14m
	  kube-system                 kube-apiserver-ha-181800             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-181800    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-stgvm                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-181800             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-181800                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m40s                  kube-proxy       
	  Normal   Starting                 14m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node ha-181800 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     14m (x8 over 14m)      kubelet          Node ha-181800 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node ha-181800 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  14m                    kubelet          Node ha-181800 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     14m                    kubelet          Node ha-181800 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    14m                    kubelet          Node ha-181800 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 14m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 14m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   RegisteredNode           14m                    node-controller  Node ha-181800 event: Registered Node ha-181800 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-181800 event: Registered Node ha-181800 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-181800 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-181800 event: Registered Node ha-181800 in Controller
	  Normal   RegisteredNode           8m27s                  node-controller  Node ha-181800 event: Registered Node ha-181800 in Controller
	  Normal   Starting                 7m53s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 7m53s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  7m53s (x8 over 7m53s)  kubelet          Node ha-181800 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m53s (x8 over 7m53s)  kubelet          Node ha-181800 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m53s (x8 over 7m53s)  kubelet          Node ha-181800 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m59s                  node-controller  Node ha-181800 event: Registered Node ha-181800 in Controller
	
	
	Name:               ha-181800-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-181800-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=ha-181800
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_18T17_34_02_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 17:34:02 +0000
	Taints:             node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-181800-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 17:39:26 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sat, 18 Oct 2025 17:39:16 +0000   Sat, 18 Oct 2025 17:42:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sat, 18 Oct 2025 17:39:16 +0000   Sat, 18 Oct 2025 17:42:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sat, 18 Oct 2025 17:39:16 +0000   Sat, 18 Oct 2025 17:42:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sat, 18 Oct 2025 17:39:16 +0000   Sat, 18 Oct 2025 17:42:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-181800-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                b2dd8f24-78e0-4eba-8b0c-b12412f7af7d
	  Boot ID:                    8a1d6305-2994-4aa4-a4cc-6b62966b9918
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-cp9q6                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-ha-181800-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         13m
	  kube-system                 kindnet-86s8z                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-181800-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-181800-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-dpwpn                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-181800-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-181800-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 13m                kube-proxy       
	  Normal   RegisteredNode           13m                node-controller  Node ha-181800-m02 event: Registered Node ha-181800-m02 in Controller
	  Normal   RegisteredNode           13m                node-controller  Node ha-181800-m02 event: Registered Node ha-181800-m02 in Controller
	  Normal   RegisteredNode           12m                node-controller  Node ha-181800-m02 event: Registered Node ha-181800-m02 in Controller
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node ha-181800-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node ha-181800-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  10m (x9 over 10m)  kubelet          Node ha-181800-m02 status is now: NodeHasSufficientMemory
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Normal   NodeNotReady             9m50s              node-controller  Node ha-181800-m02 status is now: NodeNotReady
	  Warning  ContainerGCFailed        9m16s              kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           8m27s              node-controller  Node ha-181800-m02 event: Registered Node ha-181800-m02 in Controller
	  Normal   RegisteredNode           5m59s              node-controller  Node ha-181800-m02 event: Registered Node ha-181800-m02 in Controller
	  Normal   NodeNotReady             5m9s               node-controller  Node ha-181800-m02 status is now: NodeNotReady
	
	
	Name:               ha-181800-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-181800-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=ha-181800
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_18T17_35_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 17:35:18 +0000
	Taints:             node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-181800-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 17:39:12 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sat, 18 Oct 2025 17:38:02 +0000   Sat, 18 Oct 2025 17:42:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sat, 18 Oct 2025 17:38:02 +0000   Sat, 18 Oct 2025 17:42:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sat, 18 Oct 2025 17:38:02 +0000   Sat, 18 Oct 2025 17:42:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sat, 18 Oct 2025 17:38:02 +0000   Sat, 18 Oct 2025 17:42:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-181800-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                4a1abf8a-63a3-4737-81ec-1878616c489b
	  Boot ID:                    8a1d6305-2994-4aa4-a4cc-6b62966b9918
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-lzcbm                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-ha-181800-m03                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-9qbbw                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-ha-181800-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-181800-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-qsqmb                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-181800-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-181800-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        12m    kube-proxy       
	  Normal  RegisteredNode  12m    node-controller  Node ha-181800-m03 event: Registered Node ha-181800-m03 in Controller
	  Normal  RegisteredNode  12m    node-controller  Node ha-181800-m03 event: Registered Node ha-181800-m03 in Controller
	  Normal  RegisteredNode  12m    node-controller  Node ha-181800-m03 event: Registered Node ha-181800-m03 in Controller
	  Normal  RegisteredNode  8m27s  node-controller  Node ha-181800-m03 event: Registered Node ha-181800-m03 in Controller
	  Normal  RegisteredNode  5m59s  node-controller  Node ha-181800-m03 event: Registered Node ha-181800-m03 in Controller
	  Normal  NodeNotReady    5m9s   node-controller  Node ha-181800-m03 status is now: NodeNotReady
	
	
	Name:               ha-181800-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-181800-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=ha-181800
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_18T17_36_11_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 17:36:10 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-181800-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 17:39:13 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sat, 18 Oct 2025 17:38:23 +0000   Sat, 18 Oct 2025 17:42:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sat, 18 Oct 2025 17:38:23 +0000   Sat, 18 Oct 2025 17:42:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sat, 18 Oct 2025 17:38:23 +0000   Sat, 18 Oct 2025 17:42:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sat, 18 Oct 2025 17:38:23 +0000   Sat, 18 Oct 2025 17:42:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-181800-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                afc79373-b3a1-4495-8f28-5c3685ad131e
	  Boot ID:                    8a1d6305-2994-4aa4-a4cc-6b62966b9918
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-88bv7       100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-proxy-fj4ww    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   NodeHasNoDiskPressure    11m (x3 over 11m)  kubelet          Node ha-181800-m04 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 11m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 11m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  11m (x3 over 11m)  kubelet          Node ha-181800-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     11m (x3 over 11m)  kubelet          Node ha-181800-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m                node-controller  Node ha-181800-m04 event: Registered Node ha-181800-m04 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-181800-m04 event: Registered Node ha-181800-m04 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-181800-m04 event: Registered Node ha-181800-m04 in Controller
	  Normal   NodeReady                10m                kubelet          Node ha-181800-m04 status is now: NodeReady
	  Normal   RegisteredNode           8m27s              node-controller  Node ha-181800-m04 event: Registered Node ha-181800-m04 in Controller
	  Normal   RegisteredNode           5m59s              node-controller  Node ha-181800-m04 event: Registered Node ha-181800-m04 in Controller
	  Normal   NodeNotReady             5m9s               node-controller  Node ha-181800-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[Oct18 16:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014995] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.499206] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.035776] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.808632] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.418900] kauditd_printk_skb: 36 callbacks suppressed
	[Oct18 17:12] overlayfs: idmapped layers are currently not supported
	[  +0.082393] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Oct18 17:18] overlayfs: idmapped layers are currently not supported
	[Oct18 17:19] overlayfs: idmapped layers are currently not supported
	[Oct18 17:33] overlayfs: idmapped layers are currently not supported
	[ +35.716082] overlayfs: idmapped layers are currently not supported
	[Oct18 17:35] overlayfs: idmapped layers are currently not supported
	[Oct18 17:36] overlayfs: idmapped layers are currently not supported
	[Oct18 17:37] overlayfs: idmapped layers are currently not supported
	[Oct18 17:39] overlayfs: idmapped layers are currently not supported
	[  +3.088699] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [dda012a63c45a5c37a124da696c59f0ac82f51c6728ee30f5a6b3a9df6f28b54] <==
	{"level":"warn","ts":"2025-10-18T17:47:46.709685Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-18T17:47:46.714869Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-18T17:47:46.717873Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-18T17:47:46.722241Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-18T17:47:46.731714Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-18T17:47:46.736664Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-18T17:47:46.740999Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-18T17:47:46.744611Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-18T17:47:46.748584Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-18T17:47:46.752262Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-18T17:47:46.763059Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-18T17:47:46.774598Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-18T17:47:46.778908Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-18T17:47:46.782223Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-18T17:47:46.787143Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"99f9e9c79f233aa7","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: i/o timeout"}
	{"level":"warn","ts":"2025-10-18T17:47:46.787200Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"99f9e9c79f233aa7","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: i/o timeout"}
	{"level":"warn","ts":"2025-10-18T17:47:46.788303Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-18T17:47:46.801126Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-18T17:47:46.810338Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-18T17:47:46.810557Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-18T17:47:46.814175Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-18T17:47:46.816843Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-18T17:47:46.821235Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-18T17:47:46.830238Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-18T17:47:46.840314Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 17:47:46 up  1:30,  0 user,  load average: 0.57, 0.92, 0.95
	Linux ha-181800 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [20677c7e60d1996e5ef30701c2fa483c048319a013425dfed6187c287c0356bf] <==
	I1018 17:47:09.509954       1 main.go:324] Node ha-181800-m04 has CIDR [10.244.3.0/24] 
	I1018 17:47:19.513044       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 17:47:19.513142       1 main.go:301] handling current node
	I1018 17:47:19.513180       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1018 17:47:19.513210       1 main.go:324] Node ha-181800-m02 has CIDR [10.244.1.0/24] 
	I1018 17:47:19.513410       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1018 17:47:19.513447       1 main.go:324] Node ha-181800-m03 has CIDR [10.244.2.0/24] 
	I1018 17:47:19.513554       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1018 17:47:19.513585       1 main.go:324] Node ha-181800-m04 has CIDR [10.244.3.0/24] 
	I1018 17:47:29.513013       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1018 17:47:29.513108       1 main.go:324] Node ha-181800-m03 has CIDR [10.244.2.0/24] 
	I1018 17:47:29.513281       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1018 17:47:29.513322       1 main.go:324] Node ha-181800-m04 has CIDR [10.244.3.0/24] 
	I1018 17:47:29.513420       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 17:47:29.513455       1 main.go:301] handling current node
	I1018 17:47:29.513491       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1018 17:47:29.513519       1 main.go:324] Node ha-181800-m02 has CIDR [10.244.1.0/24] 
	I1018 17:47:39.513042       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1018 17:47:39.513152       1 main.go:324] Node ha-181800-m03 has CIDR [10.244.2.0/24] 
	I1018 17:47:39.513341       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1018 17:47:39.513397       1 main.go:324] Node ha-181800-m04 has CIDR [10.244.3.0/24] 
	I1018 17:47:39.513497       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 17:47:39.513543       1 main.go:301] handling current node
	I1018 17:47:39.513578       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1018 17:47:39.513607       1 main.go:324] Node ha-181800-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [3c32a11f94c333ae590b8745e77ffbb92367453ca4e6aee44e0e906b14390ca9] <==
	I1018 17:41:42.012115       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1018 17:41:42.012379       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1018 17:41:42.012425       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1018 17:41:42.013814       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1018 17:41:42.013944       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 17:41:42.025145       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1018 17:41:42.025992       1 cache.go:39] Caches are synced for autoregister controller
	I1018 17:41:42.026156       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1018 17:41:42.026261       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1018 17:41:42.026295       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1018 17:41:42.026308       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1018 17:41:42.026410       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1018 17:41:42.027548       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 17:41:42.033558       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	W1018 17:41:42.048863       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.3]
	I1018 17:41:42.050261       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 17:41:42.067717       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E1018 17:41:42.072232       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I1018 17:41:42.729546       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1018 17:41:43.284542       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I1018 17:41:45.808842       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 17:41:54.269828       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1018 17:41:54.405180       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 17:41:54.473862       1 controller.go:667] quota admission added evaluator for: deployments.apps
	W1018 17:42:03.284458       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	
	
	==> kube-apiserver [933870b5e943415b7ecac6fd98f8537b5e0e42b86569b4b7d319eff44a3da010] <==
	I1018 17:40:52.195862       1 server.go:150] Version: v1.34.1
	I1018 17:40:52.195974       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W1018 17:40:52.812771       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storagemigration.k8s.io/v1alpha1
	W1018 17:40:52.812808       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storage.k8s.io/v1alpha1
	W1018 17:40:52.812818       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=internal.apiserver.k8s.io/v1alpha1
	W1018 17:40:52.812823       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=imagepolicy.k8s.io/v1alpha1
	W1018 17:40:52.812828       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=resource.k8s.io/v1alpha3
	W1018 17:40:52.812832       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=coordination.k8s.io/v1alpha2
	W1018 17:40:52.812840       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=certificates.k8s.io/v1alpha1
	W1018 17:40:52.812844       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=admissionregistration.k8s.io/v1alpha1
	W1018 17:40:52.812850       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=authentication.k8s.io/v1alpha1
	W1018 17:40:52.812854       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=rbac.authorization.k8s.io/v1alpha1
	W1018 17:40:52.812858       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=scheduling.k8s.io/v1alpha1
	W1018 17:40:52.812862       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=node.k8s.io/v1alpha1
	W1018 17:40:52.829696       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1018 17:40:52.831179       1 logging.go:55] [core] [Channel #4 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1018 17:40:52.831774       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	I1018 17:40:52.838589       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1018 17:40:52.845223       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I1018 17:40:52.845250       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1018 17:40:52.845852       1 instance.go:239] Using reconciler: lease
	W1018 17:40:52.848887       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1018 17:41:12.829067       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1018 17:41:12.831182       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F1018 17:41:12.846964       1 instance.go:232] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [1ffdfbb5e9622e4192714fed8bfa4ea7a73dcc053f130642d8e29a5c565ebea9] <==
	I1018 17:41:07.403597       1 serving.go:386] Generated self-signed cert in-memory
	I1018 17:41:08.625550       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1018 17:41:08.625581       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 17:41:08.627414       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1018 17:41:08.627750       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 17:41:08.627867       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1018 17:41:08.628008       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E1018 17:41:23.855834       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8443/healthz\": dial tcp 192.168.49.2:8443: connect: connection refused"
	
	
	==> kube-controller-manager [96f0fa2b71beaec136d643f232999f193a1e3a16d1ca723cfb31748694731abe] <==
	I1018 17:41:47.143192       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1018 17:41:47.146859       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1018 17:41:47.162191       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1018 17:41:47.167924       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 17:41:47.177964       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1018 17:41:47.178029       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1018 17:41:47.178094       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1018 17:41:47.178140       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1018 17:41:47.186626       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1018 17:41:47.187226       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1018 17:41:47.187330       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 17:41:47.187422       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-181800"
	I1018 17:41:47.187477       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-181800-m02"
	I1018 17:41:47.187509       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-181800-m03"
	I1018 17:41:47.187545       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-181800-m04"
	I1018 17:41:47.187570       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1018 17:41:47.188233       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1018 17:41:47.188405       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1018 17:41:47.187047       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1018 17:41:47.188792       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 17:41:47.189599       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 17:41:47.189657       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-181800-m04"
	I1018 17:41:47.193090       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 17:41:47.204060       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 17:42:37.382673       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="PartialDisruption"
	
	
	==> kube-proxy [2e4a1f13e11624e5f4250e6082edc23d03fdf1fc7644e45614e6cdfc5dd39e76] <==
	I1018 17:42:06.262094       1 server_linux.go:53] "Using iptables proxy"
	I1018 17:42:06.334558       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 17:42:06.434813       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 17:42:06.434860       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1018 17:42:06.434950       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 17:42:06.451883       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 17:42:06.451931       1 server_linux.go:132] "Using iptables Proxier"
	I1018 17:42:06.455099       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 17:42:06.455439       1 server.go:527] "Version info" version="v1.34.1"
	I1018 17:42:06.455461       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 17:42:06.457621       1 config.go:200] "Starting service config controller"
	I1018 17:42:06.457642       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 17:42:06.457661       1 config.go:106] "Starting endpoint slice config controller"
	I1018 17:42:06.457665       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 17:42:06.457677       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 17:42:06.457681       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 17:42:06.458386       1 config.go:309] "Starting node config controller"
	I1018 17:42:06.458405       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 17:42:06.458412       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 17:42:06.558355       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 17:42:06.558395       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 17:42:06.558458       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [6e9b6c2f0e69c56776af6be092e8313aef540b7319fd0664f3eb3f947353a66b] <==
	E1018 17:41:07.266841       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8443/api/v1/namespaces?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 17:41:07.311343       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 17:41:07.533447       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 17:41:07.651007       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 17:41:08.355495       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 17:41:16.769551       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 17:41:17.489724       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 17:41:17.665056       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 17:41:18.205960       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 17:41:18.570146       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 17:41:18.949283       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 17:41:21.873636       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 17:41:21.969747       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 17:41:22.140090       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 17:41:23.503240       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 17:41:24.328010       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 17:41:25.411284       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 17:41:25.991046       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 17:41:26.048796       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 17:41:27.484563       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1018 17:41:28.014616       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 17:41:28.168052       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 17:41:29.601662       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 17:41:31.989429       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	I1018 17:42:01.134075       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 17:41:54 ha-181800 kubelet[798]: E1018 17:41:54.537384     798 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/coredns-66bc5c9577-p7nbg" podUID="9d361193-5b45-400e-8161-804fc30e7515"
	Oct 18 17:41:54 ha-181800 kubelet[798]: E1018 17:41:54.541593     798 kuberuntime_manager.go:1449] "Unhandled Error" err="container kindnet-cni start failed in pod kindnet-72mvm_kube-system(5edfc356-9d49-4895-b36a-06c2bd39155a): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 18 17:41:54 ha-181800 kubelet[798]: E1018 17:41:54.541650     798 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/kindnet-72mvm" podUID="5edfc356-9d49-4895-b36a-06c2bd39155a"
	Oct 18 17:41:54 ha-181800 kubelet[798]: E1018 17:41:54.543446     798 kuberuntime_manager.go:1449] "Unhandled Error" err="container busybox start failed in pod busybox-7b57f96db7-fbwpv_default(58e37574-901f-46d4-bb33-2d0f7ae9c08c): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 18 17:41:54 ha-181800 kubelet[798]: E1018 17:41:54.543484     798 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="default/busybox-7b57f96db7-fbwpv" podUID="58e37574-901f-46d4-bb33-2d0f7ae9c08c"
	Oct 18 17:41:54 ha-181800 kubelet[798]: E1018 17:41:54.556129     798 kuberuntime_manager.go:1449] "Unhandled Error" err="container storage-provisioner start failed in pod storage-provisioner_kube-system(3c6521cd-8e1b-46aa-96a3-39e475e1426c): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 18 17:41:54 ha-181800 kubelet[798]: E1018 17:41:54.556318     798 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/storage-provisioner" podUID="3c6521cd-8e1b-46aa-96a3-39e475e1426c"
	Oct 18 17:41:54 ha-181800 kubelet[798]: W1018 17:41:54.573814     798 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/5743bf3218eb5aef405eed98d37c004771c8345cbd69290b8943dc0c7cbde7c2/crio-578310fdfac473102b8772a2897f522e4e15e81fc4a884380a337b9e6d1aa5b2 WatchSource:0}: Error finding container 578310fdfac473102b8772a2897f522e4e15e81fc4a884380a337b9e6d1aa5b2: Status 404 returned error can't find the container with id 578310fdfac473102b8772a2897f522e4e15e81fc4a884380a337b9e6d1aa5b2
	Oct 18 17:41:54 ha-181800 kubelet[798]: E1018 17:41:54.578568     798 kuberuntime_manager.go:1449] "Unhandled Error" err="container coredns start failed in pod coredns-66bc5c9577-f6v2w_kube-system(a1fbdf00-9636-43a5-b1ed-a98bcacb5537): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 18 17:41:54 ha-181800 kubelet[798]: E1018 17:41:54.578616     798 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/coredns-66bc5c9577-f6v2w" podUID="a1fbdf00-9636-43a5-b1ed-a98bcacb5537"
	Oct 18 17:41:55 ha-181800 kubelet[798]: I1018 17:41:55.114096     798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1a1eda2cde092be2eda0d8bef8f7ec3" path="/var/lib/kubelet/pods/a1a1eda2cde092be2eda0d8bef8f7ec3/volumes"
	Oct 18 17:41:55 ha-181800 kubelet[798]: E1018 17:41:55.433187     798 kuberuntime_manager.go:1449] "Unhandled Error" err="container coredns start failed in pod coredns-66bc5c9577-f6v2w_kube-system(a1fbdf00-9636-43a5-b1ed-a98bcacb5537): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 18 17:41:55 ha-181800 kubelet[798]: E1018 17:41:55.433245     798 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/coredns-66bc5c9577-f6v2w" podUID="a1fbdf00-9636-43a5-b1ed-a98bcacb5537"
	Oct 18 17:41:55 ha-181800 kubelet[798]: E1018 17:41:55.435023     798 kuberuntime_manager.go:1449] "Unhandled Error" err="container coredns start failed in pod coredns-66bc5c9577-p7nbg_kube-system(9d361193-5b45-400e-8161-804fc30e7515): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 18 17:41:55 ha-181800 kubelet[798]: E1018 17:41:55.435148     798 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/coredns-66bc5c9577-p7nbg" podUID="9d361193-5b45-400e-8161-804fc30e7515"
	Oct 18 17:41:55 ha-181800 kubelet[798]: E1018 17:41:55.441863     798 kuberuntime_manager.go:1449] "Unhandled Error" err="container busybox start failed in pod busybox-7b57f96db7-fbwpv_default(58e37574-901f-46d4-bb33-2d0f7ae9c08c): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 18 17:41:55 ha-181800 kubelet[798]: E1018 17:41:55.441915     798 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="default/busybox-7b57f96db7-fbwpv" podUID="58e37574-901f-46d4-bb33-2d0f7ae9c08c"
	Oct 18 17:41:55 ha-181800 kubelet[798]: E1018 17:41:55.445392     798 kuberuntime_manager.go:1449] "Unhandled Error" err="container kube-proxy start failed in pod kube-proxy-stgvm_kube-system(15b89226-91ae-478f-acfe-7841776b1377): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 18 17:41:55 ha-181800 kubelet[798]: E1018 17:41:55.445443     798 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/kube-proxy-stgvm" podUID="15b89226-91ae-478f-acfe-7841776b1377"
	Oct 18 17:41:55 ha-181800 kubelet[798]: E1018 17:41:55.450521     798 kuberuntime_manager.go:1449] "Unhandled Error" err="container kindnet-cni start failed in pod kindnet-72mvm_kube-system(5edfc356-9d49-4895-b36a-06c2bd39155a): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 18 17:41:55 ha-181800 kubelet[798]: E1018 17:41:55.450564     798 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/kindnet-72mvm" podUID="5edfc356-9d49-4895-b36a-06c2bd39155a"
	Oct 18 17:41:55 ha-181800 kubelet[798]: E1018 17:41:55.458132     798 kuberuntime_manager.go:1449] "Unhandled Error" err="container storage-provisioner start failed in pod storage-provisioner_kube-system(3c6521cd-8e1b-46aa-96a3-39e475e1426c): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 18 17:41:55 ha-181800 kubelet[798]: E1018 17:41:55.458255     798 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/storage-provisioner" podUID="3c6521cd-8e1b-46aa-96a3-39e475e1426c"
	Oct 18 17:42:53 ha-181800 kubelet[798]: E1018 17:42:53.045182     798 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f59bab0e4fe86f69836eb694e5d31105cca80fd917445482f23b6d46da571384\": container with ID starting with f59bab0e4fe86f69836eb694e5d31105cca80fd917445482f23b6d46da571384 not found: ID does not exist" containerID="f59bab0e4fe86f69836eb694e5d31105cca80fd917445482f23b6d46da571384"
	Oct 18 17:42:53 ha-181800 kubelet[798]: I1018 17:42:53.045240     798 kuberuntime_gc.go:364] "Error getting ContainerStatus for containerID" containerID="f59bab0e4fe86f69836eb694e5d31105cca80fd917445482f23b6d46da571384" err="rpc error: code = NotFound desc = could not find container \"f59bab0e4fe86f69836eb694e5d31105cca80fd917445482f23b6d46da571384\": container with ID starting with f59bab0e4fe86f69836eb694e5d31105cca80fd917445482f23b6d46da571384 not found: ID does not exist"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-181800 -n ha-181800
helpers_test.go:269: (dbg) Run:  kubectl --context ha-181800 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (5.48s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (6.43s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:415: expected profile "ha-181800" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-181800\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-181800\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSS
haresRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-181800\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.49.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{
\"Name\":\"m02\",\"IP\":\"192.168.49.3\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.49.4\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.49.5\",\"Port\":0,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubetail\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvid
ia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizat
ions\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-181800
helpers_test.go:243: (dbg) docker inspect ha-181800:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5743bf3218eb5aef405eed98d37c004771c8345cbd69290b8943dc0c7cbde7c2",
	        "Created": "2025-10-18T17:32:56.632116312Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 51376,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T17:39:46.245999615Z",
	            "FinishedAt": "2025-10-18T17:39:45.630064495Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/5743bf3218eb5aef405eed98d37c004771c8345cbd69290b8943dc0c7cbde7c2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5743bf3218eb5aef405eed98d37c004771c8345cbd69290b8943dc0c7cbde7c2/hostname",
	        "HostsPath": "/var/lib/docker/containers/5743bf3218eb5aef405eed98d37c004771c8345cbd69290b8943dc0c7cbde7c2/hosts",
	        "LogPath": "/var/lib/docker/containers/5743bf3218eb5aef405eed98d37c004771c8345cbd69290b8943dc0c7cbde7c2/5743bf3218eb5aef405eed98d37c004771c8345cbd69290b8943dc0c7cbde7c2-json.log",
	        "Name": "/ha-181800",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-181800:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-181800",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5743bf3218eb5aef405eed98d37c004771c8345cbd69290b8943dc0c7cbde7c2",
	                "LowerDir": "/var/lib/docker/overlay2/3ee9cac217f1df57c366a054bdb2c2082365ef42fdd9cc6be8e55acf85cb35b8-init/diff:/var/lib/docker/overlay2/584ab177b02ad2db5330471b7171ad39934c457d8615b9ee4939a04b59f78474/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3ee9cac217f1df57c366a054bdb2c2082365ef42fdd9cc6be8e55acf85cb35b8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3ee9cac217f1df57c366a054bdb2c2082365ef42fdd9cc6be8e55acf85cb35b8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3ee9cac217f1df57c366a054bdb2c2082365ef42fdd9cc6be8e55acf85cb35b8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-181800",
	                "Source": "/var/lib/docker/volumes/ha-181800/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-181800",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-181800",
	                "name.minikube.sigs.k8s.io": "ha-181800",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "efaac0f11b270c145ecb6a49cdddbc0cc50de47d14ed81303acfb3d93ecaef30",
	            "SandboxKey": "/var/run/docker/netns/efaac0f11b27",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32808"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32809"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32812"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32810"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32811"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-181800": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4a:ba:f8:3c:6b:00",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "903568cdf824d38f52cb9a58c116a852c83eb599cf8cc87e25ba21b593e45142",
	                    "EndpointID": "af9b438a40e91de308acdf0827c862a018060c99dd48a4f5e67a2e361be9d341",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-181800",
	                        "5743bf3218eb"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-181800 -n ha-181800
helpers_test.go:252: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p ha-181800 logs -n 25: (2.406825593s)
helpers_test.go:260: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-181800 ssh -n ha-181800-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ ssh     │ ha-181800 ssh -n ha-181800-m02 sudo cat /home/docker/cp-test_ha-181800-m03_ha-181800-m02.txt                                         │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ cp      │ ha-181800 cp ha-181800-m03:/home/docker/cp-test.txt ha-181800-m04:/home/docker/cp-test_ha-181800-m03_ha-181800-m04.txt               │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ ssh     │ ha-181800 ssh -n ha-181800-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ ssh     │ ha-181800 ssh -n ha-181800-m04 sudo cat /home/docker/cp-test_ha-181800-m03_ha-181800-m04.txt                                         │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ cp      │ ha-181800 cp testdata/cp-test.txt ha-181800-m04:/home/docker/cp-test.txt                                                             │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ ssh     │ ha-181800 ssh -n ha-181800-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ cp      │ ha-181800 cp ha-181800-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1463328482/001/cp-test_ha-181800-m04.txt │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ ssh     │ ha-181800 ssh -n ha-181800-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ cp      │ ha-181800 cp ha-181800-m04:/home/docker/cp-test.txt ha-181800:/home/docker/cp-test_ha-181800-m04_ha-181800.txt                       │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ ssh     │ ha-181800 ssh -n ha-181800-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ ssh     │ ha-181800 ssh -n ha-181800 sudo cat /home/docker/cp-test_ha-181800-m04_ha-181800.txt                                                 │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ cp      │ ha-181800 cp ha-181800-m04:/home/docker/cp-test.txt ha-181800-m02:/home/docker/cp-test_ha-181800-m04_ha-181800-m02.txt               │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ ssh     │ ha-181800 ssh -n ha-181800-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ ssh     │ ha-181800 ssh -n ha-181800-m02 sudo cat /home/docker/cp-test_ha-181800-m04_ha-181800-m02.txt                                         │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ cp      │ ha-181800 cp ha-181800-m04:/home/docker/cp-test.txt ha-181800-m03:/home/docker/cp-test_ha-181800-m04_ha-181800-m03.txt               │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ ssh     │ ha-181800 ssh -n ha-181800-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ ssh     │ ha-181800 ssh -n ha-181800-m03 sudo cat /home/docker/cp-test_ha-181800-m04_ha-181800-m03.txt                                         │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ node    │ ha-181800 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ node    │ ha-181800 node start m02 --alsologtostderr -v 5                                                                                      │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:39 UTC │
	│ node    │ ha-181800 node list --alsologtostderr -v 5                                                                                           │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:39 UTC │                     │
	│ stop    │ ha-181800 stop --alsologtostderr -v 5                                                                                                │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:39 UTC │ 18 Oct 25 17:39 UTC │
	│ start   │ ha-181800 start --wait true --alsologtostderr -v 5                                                                                   │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:39 UTC │                     │
	│ node    │ ha-181800 node list --alsologtostderr -v 5                                                                                           │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:47 UTC │                     │
	│ node    │ ha-181800 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:47 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 17:39:45
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 17:39:45.975281   51251 out.go:360] Setting OutFile to fd 1 ...
	I1018 17:39:45.975504   51251 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 17:39:45.975531   51251 out.go:374] Setting ErrFile to fd 2...
	I1018 17:39:45.975549   51251 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 17:39:45.975846   51251 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-2509/.minikube/bin
	I1018 17:39:45.976262   51251 out.go:368] Setting JSON to false
	I1018 17:39:45.977169   51251 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":4935,"bootTime":1760804251,"procs":151,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1018 17:39:45.977269   51251 start.go:141] virtualization:  
	I1018 17:39:45.980610   51251 out.go:179] * [ha-181800] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 17:39:45.984311   51251 out.go:179]   - MINIKUBE_LOCATION=21409
	I1018 17:39:45.984374   51251 notify.go:220] Checking for updates...
	I1018 17:39:45.990274   51251 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 17:39:45.993215   51251 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-2509/kubeconfig
	I1018 17:39:45.996106   51251 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-2509/.minikube
	I1018 17:39:45.999014   51251 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 17:39:46.004420   51251 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 17:39:46.008306   51251 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:39:46.008436   51251 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 17:39:46.042019   51251 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 17:39:46.042131   51251 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 17:39:46.099091   51251 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-18 17:39:46.089556228 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 17:39:46.099210   51251 docker.go:318] overlay module found
	I1018 17:39:46.102259   51251 out.go:179] * Using the docker driver based on existing profile
	I1018 17:39:46.105078   51251 start.go:305] selected driver: docker
	I1018 17:39:46.105099   51251 start.go:925] validating driver "docker" against &{Name:ha-181800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-181800 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:f
alse ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Dis
ableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 17:39:46.105237   51251 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 17:39:46.105338   51251 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 17:39:46.159602   51251 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-18 17:39:46.150874009 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 17:39:46.159982   51251 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 17:39:46.160020   51251 cni.go:84] Creating CNI manager for ""
	I1018 17:39:46.160080   51251 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1018 17:39:46.160126   51251 start.go:349] cluster config:
	{Name:ha-181800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-181800 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 17:39:46.165176   51251 out.go:179] * Starting "ha-181800" primary control-plane node in "ha-181800" cluster
	I1018 17:39:46.168051   51251 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 17:39:46.170939   51251 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 17:39:46.173836   51251 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 17:39:46.173896   51251 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1018 17:39:46.173911   51251 cache.go:58] Caching tarball of preloaded images
	I1018 17:39:46.173925   51251 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 17:39:46.173990   51251 preload.go:233] Found /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 17:39:46.174000   51251 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 17:39:46.174155   51251 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/config.json ...
	I1018 17:39:46.192746   51251 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 17:39:46.192769   51251 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 17:39:46.192782   51251 cache.go:232] Successfully downloaded all kic artifacts
	I1018 17:39:46.192803   51251 start.go:360] acquireMachinesLock for ha-181800: {Name:mk3f5dfba2ab7d01f94f924dfcc5edab5f076901 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 17:39:46.192864   51251 start.go:364] duration metric: took 36.243µs to acquireMachinesLock for "ha-181800"
	I1018 17:39:46.192888   51251 start.go:96] Skipping create...Using existing machine configuration
	I1018 17:39:46.192896   51251 fix.go:54] fixHost starting: 
	I1018 17:39:46.193211   51251 cli_runner.go:164] Run: docker container inspect ha-181800 --format={{.State.Status}}
	I1018 17:39:46.209470   51251 fix.go:112] recreateIfNeeded on ha-181800: state=Stopped err=<nil>
	W1018 17:39:46.209498   51251 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 17:39:46.212825   51251 out.go:252] * Restarting existing docker container for "ha-181800" ...
	I1018 17:39:46.212900   51251 cli_runner.go:164] Run: docker start ha-181800
	I1018 17:39:46.480673   51251 cli_runner.go:164] Run: docker container inspect ha-181800 --format={{.State.Status}}
	I1018 17:39:46.500591   51251 kic.go:430] container "ha-181800" state is running.
	I1018 17:39:46.501011   51251 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-181800
	I1018 17:39:46.526396   51251 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/config.json ...
	I1018 17:39:46.526638   51251 machine.go:93] provisionDockerMachine start ...
	I1018 17:39:46.526707   51251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:39:46.546472   51251 main.go:141] libmachine: Using SSH client type: native
	I1018 17:39:46.546909   51251 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32808 <nil> <nil>}
	I1018 17:39:46.546927   51251 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 17:39:46.547526   51251 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1018 17:39:49.696893   51251 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-181800
	
	I1018 17:39:49.696925   51251 ubuntu.go:182] provisioning hostname "ha-181800"
	I1018 17:39:49.697031   51251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:39:49.714524   51251 main.go:141] libmachine: Using SSH client type: native
	I1018 17:39:49.714832   51251 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32808 <nil> <nil>}
	I1018 17:39:49.714849   51251 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-181800 && echo "ha-181800" | sudo tee /etc/hostname
	I1018 17:39:49.873528   51251 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-181800
	
	I1018 17:39:49.873612   51251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:39:49.891188   51251 main.go:141] libmachine: Using SSH client type: native
	I1018 17:39:49.891504   51251 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32808 <nil> <nil>}
	I1018 17:39:49.891521   51251 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-181800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-181800/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-181800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 17:39:50.037199   51251 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 17:39:50.037228   51251 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-2509/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-2509/.minikube}
	I1018 17:39:50.037247   51251 ubuntu.go:190] setting up certificates
	I1018 17:39:50.037257   51251 provision.go:84] configureAuth start
	I1018 17:39:50.037320   51251 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-181800
	I1018 17:39:50.055129   51251 provision.go:143] copyHostCerts
	I1018 17:39:50.055181   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem
	I1018 17:39:50.055213   51251 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem, removing ...
	I1018 17:39:50.055234   51251 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem
	I1018 17:39:50.055314   51251 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem (1078 bytes)
	I1018 17:39:50.055408   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem
	I1018 17:39:50.055430   51251 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem, removing ...
	I1018 17:39:50.055438   51251 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem
	I1018 17:39:50.055466   51251 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem (1123 bytes)
	I1018 17:39:50.055525   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem
	I1018 17:39:50.055546   51251 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem, removing ...
	I1018 17:39:50.055555   51251 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem
	I1018 17:39:50.055581   51251 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem (1675 bytes)
	I1018 17:39:50.055647   51251 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem org=jenkins.ha-181800 san=[127.0.0.1 192.168.49.2 ha-181800 localhost minikube]
	I1018 17:39:50.382522   51251 provision.go:177] copyRemoteCerts
	I1018 17:39:50.382593   51251 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 17:39:50.382633   51251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:39:50.403959   51251 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800/id_rsa Username:docker}
	I1018 17:39:50.508789   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1018 17:39:50.508850   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 17:39:50.526450   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1018 17:39:50.526538   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1018 17:39:50.544187   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1018 17:39:50.544274   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 17:39:50.561987   51251 provision.go:87] duration metric: took 524.706666ms to configureAuth
	I1018 17:39:50.562063   51251 ubuntu.go:206] setting minikube options for container-runtime
	I1018 17:39:50.562317   51251 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:39:50.562424   51251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:39:50.578939   51251 main.go:141] libmachine: Using SSH client type: native
	I1018 17:39:50.579244   51251 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32808 <nil> <nil>}
	I1018 17:39:50.579264   51251 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 17:39:50.937128   51251 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 17:39:50.937197   51251 machine.go:96] duration metric: took 4.410541s to provisionDockerMachine
	I1018 17:39:50.937222   51251 start.go:293] postStartSetup for "ha-181800" (driver="docker")
	I1018 17:39:50.937247   51251 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 17:39:50.937359   51251 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 17:39:50.937444   51251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:39:50.959339   51251 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800/id_rsa Username:docker}
	I1018 17:39:51.065300   51251 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 17:39:51.068761   51251 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 17:39:51.068792   51251 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 17:39:51.068803   51251 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/addons for local assets ...
	I1018 17:39:51.068858   51251 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/files for local assets ...
	I1018 17:39:51.068963   51251 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> 43202.pem in /etc/ssl/certs
	I1018 17:39:51.068976   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> /etc/ssl/certs/43202.pem
	I1018 17:39:51.069076   51251 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 17:39:51.076928   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /etc/ssl/certs/43202.pem (1708 bytes)
	I1018 17:39:51.094473   51251 start.go:296] duration metric: took 157.222631ms for postStartSetup
	I1018 17:39:51.094579   51251 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 17:39:51.094625   51251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:39:51.113220   51251 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800/id_rsa Username:docker}
	I1018 17:39:51.213567   51251 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 17:39:51.218175   51251 fix.go:56] duration metric: took 5.025272015s for fixHost
	I1018 17:39:51.218200   51251 start.go:83] releasing machines lock for "ha-181800", held for 5.025323101s
	I1018 17:39:51.218283   51251 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-181800
	I1018 17:39:51.235815   51251 ssh_runner.go:195] Run: cat /version.json
	I1018 17:39:51.235850   51251 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 17:39:51.235866   51251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:39:51.235904   51251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:39:51.261163   51251 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800/id_rsa Username:docker}
	I1018 17:39:51.270603   51251 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800/id_rsa Username:docker}
	I1018 17:39:51.360468   51251 ssh_runner.go:195] Run: systemctl --version
	I1018 17:39:51.454722   51251 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 17:39:51.498840   51251 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 17:39:51.503695   51251 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 17:39:51.503796   51251 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 17:39:51.511526   51251 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 17:39:51.511549   51251 start.go:495] detecting cgroup driver to use...
	I1018 17:39:51.511578   51251 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 17:39:51.511630   51251 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 17:39:51.526599   51251 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 17:39:51.539484   51251 docker.go:218] disabling cri-docker service (if available) ...
	I1018 17:39:51.539576   51251 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 17:39:51.554963   51251 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 17:39:51.568183   51251 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 17:39:51.676636   51251 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 17:39:51.792230   51251 docker.go:234] disabling docker service ...
	I1018 17:39:51.792306   51251 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 17:39:51.806847   51251 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 17:39:51.819137   51251 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 17:39:51.938883   51251 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 17:39:52.058796   51251 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 17:39:52.072487   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 17:39:52.088092   51251 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 17:39:52.088205   51251 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:39:52.097568   51251 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 17:39:52.097729   51251 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:39:52.107431   51251 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:39:52.116597   51251 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:39:52.125822   51251 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 17:39:52.134598   51251 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:39:52.143667   51251 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:39:52.151898   51251 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:39:52.160172   51251 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 17:39:52.167407   51251 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 17:39:52.174657   51251 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 17:39:52.287403   51251 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 17:39:52.421729   51251 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 17:39:52.421850   51251 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 17:39:52.425707   51251 start.go:563] Will wait 60s for crictl version
	I1018 17:39:52.425813   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:39:52.429420   51251 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 17:39:52.453867   51251 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 17:39:52.453974   51251 ssh_runner.go:195] Run: crio --version
	I1018 17:39:52.486777   51251 ssh_runner.go:195] Run: crio --version
	I1018 17:39:52.520354   51251 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 17:39:52.523389   51251 cli_runner.go:164] Run: docker network inspect ha-181800 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 17:39:52.539892   51251 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1018 17:39:52.543780   51251 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 17:39:52.553416   51251 kubeadm.go:883] updating cluster {Name:ha-181800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-181800 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:
false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 17:39:52.553576   51251 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 17:39:52.553634   51251 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 17:39:52.588251   51251 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 17:39:52.588276   51251 crio.go:433] Images already preloaded, skipping extraction
	I1018 17:39:52.588335   51251 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 17:39:52.613957   51251 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 17:39:52.613979   51251 cache_images.go:85] Images are preloaded, skipping loading
	I1018 17:39:52.613989   51251 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1018 17:39:52.614102   51251 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-181800 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-181800 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 17:39:52.614189   51251 ssh_runner.go:195] Run: crio config
	I1018 17:39:52.670252   51251 cni.go:84] Creating CNI manager for ""
	I1018 17:39:52.670275   51251 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1018 17:39:52.670294   51251 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 17:39:52.670319   51251 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-181800 NodeName:ha-181800 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 17:39:52.670455   51251 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-181800"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
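The kubeadm, kubelet and kube-proxy configuration above is rendered from the cluster parameters shown in the preceding "kubeadm options" line. A toy text/template rendering of just the ClusterConfiguration fragment, filled in with the same values, shows how those parameters end up in the YAML (the template here is illustrative only, not minikube's actual template):

// Illustrative only: render a ClusterConfiguration fragment from the
// parameters logged above. Field names in the params struct are assumptions.
package main

import (
	"os"
	"text/template"
)

const clusterCfg = `apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
clusterName: mk
controlPlaneEndpoint: {{.ControlPlaneEndpoint}}:{{.Port}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  dnsDomain: {{.DNSDomain}}
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	params := struct {
		ControlPlaneEndpoint, KubernetesVersion, DNSDomain, PodSubnet, ServiceSubnet string
		Port                                                                         int
	}{
		ControlPlaneEndpoint: "control-plane.minikube.internal",
		Port:                 8443,
		KubernetesVersion:    "v1.34.1",
		DNSDomain:            "cluster.local",
		PodSubnet:            "10.244.0.0/16",
		ServiceSubnet:        "10.96.0.0/12",
	}
	if err := template.Must(template.New("cfg").Parse(clusterCfg)).Execute(os.Stdout, params); err != nil {
		panic(err)
	}
}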
	
	I1018 17:39:52.670475   51251 kube-vip.go:115] generating kube-vip config ...
	I1018 17:39:52.670529   51251 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1018 17:39:52.682279   51251 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1018 17:39:52.682377   51251 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
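kube-vip is left doing ARP-based VIP failover only, because the earlier `lsmod | grep ip_vs` check exited non-zero, so control-plane load balancing was skipped. A minimal sketch of that availability check, approximating the lsmod pipe by scanning /proc/modules (assumed equivalent for loaded modules; this is not minikube's kube-vip.go):

// Sketch of the decision logged above: enable control-plane load balancing
// only when the ip_vs kernel module is loaded.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func ipvsAvailable() (bool, error) {
	f, err := os.Open("/proc/modules")
	if err != nil {
		return false, err
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		if strings.HasPrefix(sc.Text(), "ip_vs") {
			return true, nil
		}
	}
	return false, sc.Err()
}

func main() {
	ok, err := ipvsAvailable()
	fmt.Printf("ip_vs loaded=%v err=%v\n", ok, err)
	if !ok {
		fmt.Println("giving up enabling control-plane load-balancing (matches the log above)")
	}
}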
	I1018 17:39:52.682436   51251 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 17:39:52.689950   51251 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 17:39:52.690041   51251 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1018 17:39:52.697809   51251 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1018 17:39:52.710709   51251 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 17:39:52.723367   51251 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1018 17:39:52.735890   51251 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1018 17:39:52.748648   51251 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1018 17:39:52.752220   51251 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 17:39:52.762098   51251 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 17:39:52.871320   51251 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 17:39:52.886583   51251 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800 for IP: 192.168.49.2
	I1018 17:39:52.886603   51251 certs.go:195] generating shared ca certs ...
	I1018 17:39:52.886618   51251 certs.go:227] acquiring lock for ca certs: {Name:mk544ed642fa2832e9f6dd22fa45f3270b7c1ad4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:39:52.886785   51251 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key
	I1018 17:39:52.886838   51251 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key
	I1018 17:39:52.886849   51251 certs.go:257] generating profile certs ...
	I1018 17:39:52.886923   51251 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/client.key
	I1018 17:39:52.886953   51251 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.key.46a58690
	I1018 17:39:52.886970   51251 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.crt.46a58690 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1018 17:39:53.268315   51251 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.crt.46a58690 ...
	I1018 17:39:53.268348   51251 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.crt.46a58690: {Name:mk0cc861493b9d286eed0bfb736b15e28a1706f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:39:53.268572   51251 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.key.46a58690 ...
	I1018 17:39:53.268589   51251 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.key.46a58690: {Name:mk424cb4f615a1903e846801cb9cb2e734afdfb7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:39:53.268677   51251 certs.go:382] copying /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.crt.46a58690 -> /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.crt
	I1018 17:39:53.268822   51251 certs.go:386] copying /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.key.46a58690 -> /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.key
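The apiserver certificate is regenerated because its SAN list must cover the service IP, localhost, every control-plane node IP and the HA VIP 192.168.49.254. A generic crypto/x509 sketch of producing a certificate with exactly those IP SANs (self-signed here for brevity; minikube signs with its cluster CA, and this is not its crypto.go):

// Sketch only: issue a serving certificate whose IP SANs match the list
// logged above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	var ips []net.IP
	for _, s := range []string{"10.96.0.1", "127.0.0.1", "10.0.0.1",
		"192.168.49.2", "192.168.49.3", "192.168.49.4", "192.168.49.254"} {
		ips = append(ips, net.ParseIP(s))
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips,
	}
	// Self-signed for the sketch; the real flow signs with the minikubeCA key.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}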
	I1018 17:39:53.268969   51251 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.key
	I1018 17:39:53.268988   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1018 17:39:53.269005   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1018 17:39:53.269023   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1018 17:39:53.269043   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1018 17:39:53.269070   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1018 17:39:53.269094   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1018 17:39:53.269112   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1018 17:39:53.269123   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1018 17:39:53.269179   51251 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem (1338 bytes)
	W1018 17:39:53.269213   51251 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320_empty.pem, impossibly tiny 0 bytes
	I1018 17:39:53.269225   51251 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 17:39:53.269249   51251 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem (1078 bytes)
	I1018 17:39:53.269273   51251 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem (1123 bytes)
	I1018 17:39:53.269299   51251 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem (1675 bytes)
	I1018 17:39:53.269346   51251 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem (1708 bytes)
	I1018 17:39:53.269376   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> /usr/share/ca-certificates/43202.pem
	I1018 17:39:53.269392   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:39:53.269403   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem -> /usr/share/ca-certificates/4320.pem
	I1018 17:39:53.269946   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 17:39:53.289258   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 17:39:53.307330   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 17:39:53.325012   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 17:39:53.342168   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1018 17:39:53.359559   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 17:39:53.376235   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 17:39:53.393388   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 17:39:53.409944   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /usr/share/ca-certificates/43202.pem (1708 bytes)
	I1018 17:39:53.427591   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 17:39:53.443532   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem --> /usr/share/ca-certificates/4320.pem (1338 bytes)
	I1018 17:39:53.459786   51251 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 17:39:53.472627   51251 ssh_runner.go:195] Run: openssl version
	I1018 17:39:53.478997   51251 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43202.pem && ln -fs /usr/share/ca-certificates/43202.pem /etc/ssl/certs/43202.pem"
	I1018 17:39:53.486807   51251 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43202.pem
	I1018 17:39:53.490229   51251 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 17:19 /usr/share/ca-certificates/43202.pem
	I1018 17:39:53.490289   51251 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43202.pem
	I1018 17:39:53.534916   51251 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43202.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 17:39:53.547040   51251 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 17:39:53.561930   51251 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:39:53.567602   51251 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:39:53.567707   51251 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:39:53.617018   51251 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 17:39:53.628559   51251 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4320.pem && ln -fs /usr/share/ca-certificates/4320.pem /etc/ssl/certs/4320.pem"
	I1018 17:39:53.641445   51251 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4320.pem
	I1018 17:39:53.645568   51251 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 17:19 /usr/share/ca-certificates/4320.pem
	I1018 17:39:53.645680   51251 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4320.pem
	I1018 17:39:53.715014   51251 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4320.pem /etc/ssl/certs/51391683.0"
	I1018 17:39:53.744004   51251 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 17:39:53.751940   51251 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 17:39:53.829686   51251 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 17:39:53.890601   51251 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 17:39:53.957371   51251 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 17:39:54.017003   51251 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 17:39:54.064655   51251 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
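Each `openssl x509 -checkend 86400` run above asks whether the certificate will still be valid 24 hours from now; a failing check would trigger regeneration. The same test expressed with Go's crypto/x509 (sketch only; the path is one of the certificates checked above):

// Sketch of the -checkend 86400 test: does the cert outlive the next 24h?
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	path := "/var/lib/minikube/certs/apiserver-kubelet-client.crt"
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Println("read:", err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println("parse:", err)
		return
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h; it would be regenerated")
	} else {
		fmt.Println("certificate valid for at least another 24h")
	}
}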
	I1018 17:39:54.111921   51251 kubeadm.go:400] StartCluster: {Name:ha-181800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-181800 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:fal
se ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fals
e DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 17:39:54.112099   51251 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 17:39:54.112174   51251 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 17:39:54.163162   51251 cri.go:89] found id: "dda012a63c45a5c37a124da696c59f0ac82f51c6728ee30f5a6b3a9df6f28b54"
	I1018 17:39:54.163230   51251 cri.go:89] found id: "ac8ef32697a356e273cd1b84ce23b6e628c802ef7b211f001fc50bb472635814"
	I1018 17:39:54.163250   51251 cri.go:89] found id: "4957aae3df6cdc996ba2129d1f43210ebdec1c480e6db0115ee34f32691af151"
	I1018 17:39:54.163265   51251 cri.go:89] found id: "6e9b6c2f0e69c56776af6be092e8313aef540b7319fd0664f3eb3f947353a66b"
	I1018 17:39:54.163282   51251 cri.go:89] found id: "a0776ff98d8411ec5ae52a11de472cb17e1d8c764d642bf18a22aec8b44a08ee"
	I1018 17:39:54.163300   51251 cri.go:89] found id: ""
	I1018 17:39:54.163370   51251 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 17:39:54.178952   51251 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T17:39:54Z" level=error msg="open /run/runc: no such file or directory"
	I1018 17:39:54.179088   51251 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 17:39:54.202035   51251 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 17:39:54.202104   51251 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 17:39:54.202180   51251 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 17:39:54.218306   51251 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 17:39:54.218743   51251 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-181800" does not appear in /home/jenkins/minikube-integration/21409-2509/kubeconfig
	I1018 17:39:54.218882   51251 kubeconfig.go:62] /home/jenkins/minikube-integration/21409-2509/kubeconfig needs updating (will repair): [kubeconfig missing "ha-181800" cluster setting kubeconfig missing "ha-181800" context setting]
	I1018 17:39:54.219252   51251 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/kubeconfig: {Name:mk229d7d0b6575771899e0ce8a346bbddf5cf86b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:39:54.219794   51251 kapi.go:59] client config for ha-181800: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/client.key", CAFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, Us
erAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1018 17:39:54.220519   51251 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1018 17:39:54.220606   51251 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1018 17:39:54.220635   51251 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1018 17:39:54.220585   51251 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1018 17:39:54.220726   51251 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1018 17:39:54.220753   51251 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1018 17:39:54.221075   51251 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 17:39:54.234375   51251 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1018 17:39:54.234436   51251 kubeadm.go:601] duration metric: took 32.30335ms to restartPrimaryControlPlane
	I1018 17:39:54.234460   51251 kubeadm.go:402] duration metric: took 122.54698ms to StartCluster
	I1018 17:39:54.234487   51251 settings.go:142] acquiring lock: {Name:mk3a3fd093bc95e20cc1842611fedcbe4a79e692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:39:54.234565   51251 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-2509/kubeconfig
	I1018 17:39:54.235140   51251 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/kubeconfig: {Name:mk229d7d0b6575771899e0ce8a346bbddf5cf86b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:39:54.235365   51251 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 17:39:54.235417   51251 start.go:241] waiting for startup goroutines ...
	I1018 17:39:54.235446   51251 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 17:39:54.235957   51251 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:39:54.241374   51251 out.go:179] * Enabled addons: 
	I1018 17:39:54.244317   51251 addons.go:514] duration metric: took 8.873213ms for enable addons: enabled=[]
	I1018 17:39:54.244381   51251 start.go:246] waiting for cluster config update ...
	I1018 17:39:54.244403   51251 start.go:255] writing updated cluster config ...
	I1018 17:39:54.247646   51251 out.go:203] 
	I1018 17:39:54.250620   51251 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:39:54.250787   51251 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/config.json ...
	I1018 17:39:54.254182   51251 out.go:179] * Starting "ha-181800-m02" control-plane node in "ha-181800" cluster
	I1018 17:39:54.257073   51251 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 17:39:54.259992   51251 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 17:39:54.262894   51251 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 17:39:54.262941   51251 cache.go:58] Caching tarball of preloaded images
	I1018 17:39:54.263061   51251 preload.go:233] Found /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 17:39:54.263094   51251 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 17:39:54.263229   51251 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/config.json ...
	I1018 17:39:54.263458   51251 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 17:39:54.291252   51251 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 17:39:54.291269   51251 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 17:39:54.291282   51251 cache.go:232] Successfully downloaded all kic artifacts
	I1018 17:39:54.291303   51251 start.go:360] acquireMachinesLock for ha-181800-m02: {Name:mk36a488c0fbfc8557c6ba291b969aad85b45635 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 17:39:54.291352   51251 start.go:364] duration metric: took 33.977µs to acquireMachinesLock for "ha-181800-m02"
	I1018 17:39:54.291370   51251 start.go:96] Skipping create...Using existing machine configuration
	I1018 17:39:54.291375   51251 fix.go:54] fixHost starting: m02
	I1018 17:39:54.291629   51251 cli_runner.go:164] Run: docker container inspect ha-181800-m02 --format={{.State.Status}}
	I1018 17:39:54.318512   51251 fix.go:112] recreateIfNeeded on ha-181800-m02: state=Stopped err=<nil>
	W1018 17:39:54.318536   51251 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 17:39:54.321781   51251 out.go:252] * Restarting existing docker container for "ha-181800-m02" ...
	I1018 17:39:54.321859   51251 cli_runner.go:164] Run: docker start ha-181800-m02
	I1018 17:39:54.692758   51251 cli_runner.go:164] Run: docker container inspect ha-181800-m02 --format={{.State.Status}}
	I1018 17:39:54.723920   51251 kic.go:430] container "ha-181800-m02" state is running.
	I1018 17:39:54.724263   51251 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-181800-m02
	I1018 17:39:54.749215   51251 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/config.json ...
	I1018 17:39:54.749467   51251 machine.go:93] provisionDockerMachine start ...
	I1018 17:39:54.749523   51251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m02
	I1018 17:39:54.781536   51251 main.go:141] libmachine: Using SSH client type: native
	I1018 17:39:54.781830   51251 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I1018 17:39:54.781839   51251 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 17:39:54.782427   51251 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39794->127.0.0.1:32813: read: connection reset by peer
	I1018 17:39:58.082162   51251 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-181800-m02
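The earlier "ssh: handshake failed ... connection reset by peer" line is expected: the container was started moments before and sshd was not yet accepting connections, so provisioning simply re-dials until the hostname command succeeds. A sketch of that wait-and-retry pattern (not libmachine's actual code; the port is the one from this log):

// Sketch only: keep dialing the forwarded SSH port until it accepts a
// TCP connection or the deadline passes.
package main

import (
	"fmt"
	"net"
	"time"
)

func waitForSSH(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		// connection reset / refused while sshd starts up: try again
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("ssh on %s not reachable within %s", addr, timeout)
}

func main() {
	fmt.Println(waitForSSH("127.0.0.1:32813", 30*time.Second))
}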
	
	I1018 17:39:58.082184   51251 ubuntu.go:182] provisioning hostname "ha-181800-m02"
	I1018 17:39:58.082261   51251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m02
	I1018 17:39:58.126530   51251 main.go:141] libmachine: Using SSH client type: native
	I1018 17:39:58.126844   51251 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I1018 17:39:58.126855   51251 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-181800-m02 && echo "ha-181800-m02" | sudo tee /etc/hostname
	I1018 17:39:58.443573   51251 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-181800-m02
	
	I1018 17:39:58.443690   51251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m02
	I1018 17:39:58.478907   51251 main.go:141] libmachine: Using SSH client type: native
	I1018 17:39:58.479213   51251 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I1018 17:39:58.479243   51251 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-181800-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-181800-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-181800-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 17:39:58.737653   51251 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 17:39:58.737680   51251 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-2509/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-2509/.minikube}
	I1018 17:39:58.737725   51251 ubuntu.go:190] setting up certificates
	I1018 17:39:58.737736   51251 provision.go:84] configureAuth start
	I1018 17:39:58.737818   51251 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-181800-m02
	I1018 17:39:58.774675   51251 provision.go:143] copyHostCerts
	I1018 17:39:58.774718   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem
	I1018 17:39:58.774757   51251 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem, removing ...
	I1018 17:39:58.774769   51251 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem
	I1018 17:39:58.774848   51251 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem (1123 bytes)
	I1018 17:39:58.774946   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem
	I1018 17:39:58.774970   51251 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem, removing ...
	I1018 17:39:58.774977   51251 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem
	I1018 17:39:58.775018   51251 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem (1675 bytes)
	I1018 17:39:58.775074   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem
	I1018 17:39:58.775100   51251 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem, removing ...
	I1018 17:39:58.775109   51251 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem
	I1018 17:39:58.775135   51251 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem (1078 bytes)
	I1018 17:39:58.775197   51251 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem org=jenkins.ha-181800-m02 san=[127.0.0.1 192.168.49.3 ha-181800-m02 localhost minikube]
	I1018 17:39:59.196567   51251 provision.go:177] copyRemoteCerts
	I1018 17:39:59.197114   51251 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 17:39:59.197174   51251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m02
	I1018 17:39:59.222600   51251 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m02/id_rsa Username:docker}
	I1018 17:39:59.394297   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1018 17:39:59.394389   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 17:39:59.450203   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1018 17:39:59.450288   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1018 17:39:59.513512   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1018 17:39:59.513624   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 17:39:59.573995   51251 provision.go:87] duration metric: took 836.238905ms to configureAuth
	I1018 17:39:59.574021   51251 ubuntu.go:206] setting minikube options for container-runtime
	I1018 17:39:59.574290   51251 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:39:59.574415   51251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m02
	I1018 17:39:59.606597   51251 main.go:141] libmachine: Using SSH client type: native
	I1018 17:39:59.606908   51251 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I1018 17:39:59.606927   51251 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 17:40:00.196427   51251 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 17:40:00.196520   51251 machine.go:96] duration metric: took 5.447042221s to provisionDockerMachine
	I1018 17:40:00.196547   51251 start.go:293] postStartSetup for "ha-181800-m02" (driver="docker")
	I1018 17:40:00.196572   51251 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 17:40:00.196694   51251 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 17:40:00.196782   51251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m02
	I1018 17:40:00.238873   51251 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m02/id_rsa Username:docker}
	I1018 17:40:00.392500   51251 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 17:40:00.403930   51251 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 17:40:00.403959   51251 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 17:40:00.403971   51251 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/addons for local assets ...
	I1018 17:40:00.404043   51251 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/files for local assets ...
	I1018 17:40:00.404125   51251 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> 43202.pem in /etc/ssl/certs
	I1018 17:40:00.404133   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> /etc/ssl/certs/43202.pem
	I1018 17:40:00.404244   51251 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 17:40:00.423321   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /etc/ssl/certs/43202.pem (1708 bytes)
	I1018 17:40:00.459796   51251 start.go:296] duration metric: took 263.21852ms for postStartSetup
	I1018 17:40:00.459966   51251 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 17:40:00.460049   51251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m02
	I1018 17:40:00.503330   51251 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m02/id_rsa Username:docker}
	I1018 17:40:00.631049   51251 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 17:40:00.645680   51251 fix.go:56] duration metric: took 6.354295561s for fixHost
	I1018 17:40:00.645709   51251 start.go:83] releasing machines lock for "ha-181800-m02", held for 6.35434937s
	I1018 17:40:00.645791   51251 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-181800-m02
	I1018 17:40:00.682830   51251 out.go:179] * Found network options:
	I1018 17:40:00.685894   51251 out.go:179]   - NO_PROXY=192.168.49.2
	W1018 17:40:00.688804   51251 proxy.go:120] fail to check proxy env: Error ip not in block
	W1018 17:40:00.688858   51251 proxy.go:120] fail to check proxy env: Error ip not in block
	I1018 17:40:00.688930   51251 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 17:40:00.689085   51251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m02
	I1018 17:40:00.689351   51251 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 17:40:00.689409   51251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m02
	I1018 17:40:00.730142   51251 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m02/id_rsa Username:docker}
	I1018 17:40:00.730174   51251 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m02/id_rsa Username:docker}
	I1018 17:40:01.294197   51251 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 17:40:01.312592   51251 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 17:40:01.312744   51251 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 17:40:01.330228   51251 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 17:40:01.330302   51251 start.go:495] detecting cgroup driver to use...
	I1018 17:40:01.330348   51251 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 17:40:01.330425   51251 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 17:40:01.357073   51251 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 17:40:01.416356   51251 docker.go:218] disabling cri-docker service (if available) ...
	I1018 17:40:01.416475   51251 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 17:40:01.453551   51251 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 17:40:01.481435   51251 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 17:40:01.742441   51251 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 17:40:01.978817   51251 docker.go:234] disabling docker service ...
	I1018 17:40:01.978936   51251 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 17:40:02.001514   51251 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 17:40:02.021678   51251 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 17:40:02.249968   51251 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 17:40:02.480556   51251 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 17:40:02.498908   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 17:40:02.526424   51251 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 17:40:02.526493   51251 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:40:02.542071   51251 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 17:40:02.542141   51251 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:40:02.559770   51251 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:40:02.574006   51251 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:40:02.589455   51251 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 17:40:02.598587   51251 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:40:02.612076   51251 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:40:02.624069   51251 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:40:02.637136   51251 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 17:40:02.652415   51251 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 17:40:02.662181   51251 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 17:40:02.863894   51251 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 17:41:33.166156   51251 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.302227656s)
	I1018 17:41:33.166194   51251 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 17:41:33.166252   51251 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 17:41:33.170771   51251 start.go:563] Will wait 60s for crictl version
	I1018 17:41:33.170830   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:41:33.176098   51251 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 17:41:33.213255   51251 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 17:41:33.213351   51251 ssh_runner.go:195] Run: crio --version
	I1018 17:41:33.258540   51251 ssh_runner.go:195] Run: crio --version
	I1018 17:41:33.296286   51251 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 17:41:33.299353   51251 out.go:179]   - env NO_PROXY=192.168.49.2
	I1018 17:41:33.302220   51251 cli_runner.go:164] Run: docker network inspect ha-181800 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 17:41:33.319775   51251 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1018 17:41:33.324290   51251 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 17:41:33.336317   51251 mustload.go:65] Loading cluster: ha-181800
	I1018 17:41:33.336557   51251 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:41:33.336817   51251 cli_runner.go:164] Run: docker container inspect ha-181800 --format={{.State.Status}}
	I1018 17:41:33.362604   51251 host.go:66] Checking if "ha-181800" exists ...
	I1018 17:41:33.362892   51251 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800 for IP: 192.168.49.3
	I1018 17:41:33.362901   51251 certs.go:195] generating shared ca certs ...
	I1018 17:41:33.362915   51251 certs.go:227] acquiring lock for ca certs: {Name:mk544ed642fa2832e9f6dd22fa45f3270b7c1ad4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:41:33.363034   51251 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key
	I1018 17:41:33.363081   51251 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key
	I1018 17:41:33.363088   51251 certs.go:257] generating profile certs ...
	I1018 17:41:33.363157   51251 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/client.key
	I1018 17:41:33.363222   51251 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.key.887e0b27
	I1018 17:41:33.363266   51251 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.key
	I1018 17:41:33.363274   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1018 17:41:33.363286   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1018 17:41:33.363296   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1018 17:41:33.363306   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1018 17:41:33.363316   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1018 17:41:33.363328   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1018 17:41:33.363338   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1018 17:41:33.363348   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1018 17:41:33.363398   51251 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem (1338 bytes)
	W1018 17:41:33.363424   51251 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320_empty.pem, impossibly tiny 0 bytes
	I1018 17:41:33.363433   51251 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 17:41:33.363455   51251 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem (1078 bytes)
	I1018 17:41:33.363476   51251 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem (1123 bytes)
	I1018 17:41:33.363496   51251 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem (1675 bytes)
	I1018 17:41:33.363536   51251 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem (1708 bytes)
	I1018 17:41:33.363565   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:41:33.363579   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem -> /usr/share/ca-certificates/4320.pem
	I1018 17:41:33.363590   51251 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> /usr/share/ca-certificates/43202.pem
	I1018 17:41:33.363643   51251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:41:33.388336   51251 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800/id_rsa Username:docker}
	I1018 17:41:33.489250   51251 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1018 17:41:33.493494   51251 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1018 17:41:33.511835   51251 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1018 17:41:33.515898   51251 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1018 17:41:33.524188   51251 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1018 17:41:33.527936   51251 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1018 17:41:33.536545   51251 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1018 17:41:33.540347   51251 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1018 17:41:33.549002   51251 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1018 17:41:33.552698   51251 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1018 17:41:33.561692   51251 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1018 17:41:33.565522   51251 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1018 17:41:33.574471   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 17:41:33.598033   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 17:41:33.620604   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 17:41:33.644520   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 17:41:33.671246   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1018 17:41:33.694599   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 17:41:33.716649   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 17:41:33.739805   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 17:41:33.761744   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 17:41:33.784279   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem --> /usr/share/ca-certificates/4320.pem (1338 bytes)
	I1018 17:41:33.807665   51251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /usr/share/ca-certificates/43202.pem (1708 bytes)
	I1018 17:41:33.831497   51251 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1018 17:41:33.845903   51251 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1018 17:41:33.860149   51251 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1018 17:41:33.874010   51251 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1018 17:41:33.893500   51251 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1018 17:41:33.908151   51251 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1018 17:41:33.922971   51251 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1018 17:41:33.937486   51251 ssh_runner.go:195] Run: openssl version
	I1018 17:41:33.944301   51251 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43202.pem && ln -fs /usr/share/ca-certificates/43202.pem /etc/ssl/certs/43202.pem"
	I1018 17:41:33.953654   51251 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43202.pem
	I1018 17:41:33.958036   51251 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 17:19 /usr/share/ca-certificates/43202.pem
	I1018 17:41:33.958171   51251 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43202.pem
	I1018 17:41:34.004993   51251 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43202.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 17:41:34.015337   51251 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 17:41:34.024718   51251 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:41:34.029508   51251 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:41:34.029667   51251 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:41:34.076487   51251 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 17:41:34.085949   51251 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4320.pem && ln -fs /usr/share/ca-certificates/4320.pem /etc/ssl/certs/4320.pem"
	I1018 17:41:34.095637   51251 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4320.pem
	I1018 17:41:34.100153   51251 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 17:19 /usr/share/ca-certificates/4320.pem
	I1018 17:41:34.100269   51251 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4320.pem
	I1018 17:41:34.148268   51251 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4320.pem /etc/ssl/certs/51391683.0"
	I1018 17:41:34.158037   51251 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 17:41:34.162480   51251 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 17:41:34.206936   51251 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 17:41:34.251076   51251 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 17:41:34.294598   51251 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 17:41:34.337252   51251 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 17:41:34.379050   51251 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1018 17:41:34.422861   51251 kubeadm.go:934] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1018 17:41:34.423031   51251 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-181800-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-181800 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 17:41:34.423078   51251 kube-vip.go:115] generating kube-vip config ...
	I1018 17:41:34.423166   51251 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1018 17:41:34.435895   51251 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1018 17:41:34.435996   51251 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1018 17:41:34.436081   51251 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 17:41:34.444655   51251 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 17:41:34.444772   51251 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1018 17:41:34.452743   51251 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1018 17:41:34.466348   51251 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 17:41:34.479899   51251 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1018 17:41:34.497063   51251 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1018 17:41:34.500892   51251 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 17:41:34.516267   51251 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 17:41:34.674326   51251 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 17:41:34.690850   51251 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 17:41:34.691288   51251 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:41:34.696864   51251 out.go:179] * Verifying Kubernetes components...
	I1018 17:41:34.699590   51251 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 17:41:34.858485   51251 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 17:41:34.875760   51251 kapi.go:59] client config for ha-181800: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/client.key", CAFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1018 17:41:34.876060   51251 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1018 17:41:34.876378   51251 node_ready.go:35] waiting up to 6m0s for node "ha-181800-m02" to be "Ready" ...
	I1018 17:41:41.842514   51251 node_ready.go:49] node "ha-181800-m02" is "Ready"
	I1018 17:41:41.842547   51251 node_ready.go:38] duration metric: took 6.966151068s for node "ha-181800-m02" to be "Ready" ...
	I1018 17:41:41.842561   51251 api_server.go:52] waiting for apiserver process to appear ...
	I1018 17:41:41.842620   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:42.343686   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:42.843043   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:43.343313   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:43.843326   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:44.343648   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:44.843315   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:45.342911   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:45.842777   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:46.343420   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:46.843693   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:47.342746   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:47.843464   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:48.342878   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:48.843391   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:49.342759   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:49.843483   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:50.342789   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:50.842761   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:51.342785   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:51.843356   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:52.342785   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:52.843177   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:53.342698   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:53.842872   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:54.343544   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:54.842904   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:55.343425   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:55.843434   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:56.343297   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:56.843518   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:57.343357   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:57.842816   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:58.343642   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:58.842783   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:59.343043   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:41:59.843412   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:00.342951   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:00.843389   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:01.342774   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:01.842787   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:02.343236   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:02.842685   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:03.342751   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:03.843695   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:04.342729   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:04.843543   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:05.343721   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:05.843447   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:06.342743   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:06.842790   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:07.343656   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:07.843541   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:08.343267   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:08.843707   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:09.342771   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:09.843748   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:10.342856   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:10.842752   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:11.343307   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:11.842677   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:12.343443   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:12.843733   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:13.343641   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:13.842734   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:14.343649   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:14.842779   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:15.342756   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:15.842763   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:16.343741   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:16.842779   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:17.342825   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:17.843340   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:18.342759   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:18.842772   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:19.342755   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:19.842777   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:20.343137   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:20.843594   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:21.343397   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:21.843388   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:22.342798   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:22.843107   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:23.343587   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:23.842910   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:24.343458   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:24.843264   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:25.342775   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:25.842894   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:26.343732   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:26.842775   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:27.342787   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:27.842760   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:28.342772   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:28.843266   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:29.343220   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:29.843228   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:30.343087   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:30.842732   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:31.342878   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:31.843084   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:32.343181   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:32.843480   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:33.343320   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:33.842755   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:34.342929   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:34.842842   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:42:34.842930   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:42:34.869988   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:42:34.870010   51251 cri.go:89] found id: ""
	I1018 17:42:34.870018   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:42:34.870073   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:34.873710   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:42:34.873778   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:42:34.899173   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:42:34.899196   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:42:34.899202   51251 cri.go:89] found id: ""
	I1018 17:42:34.899209   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:42:34.899263   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:34.903214   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:34.906828   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:42:34.906903   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:42:34.933625   51251 cri.go:89] found id: ""
	I1018 17:42:34.933648   51251 logs.go:282] 0 containers: []
	W1018 17:42:34.933656   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:42:34.933663   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:42:34.933723   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:42:34.959655   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:42:34.959675   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:42:34.959680   51251 cri.go:89] found id: ""
	I1018 17:42:34.959688   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:42:34.959743   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:34.972509   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:34.977434   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:42:34.977506   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:42:35.014139   51251 cri.go:89] found id: ""
	I1018 17:42:35.014165   51251 logs.go:282] 0 containers: []
	W1018 17:42:35.014173   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:42:35.014180   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:42:35.014287   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:42:35.047968   51251 cri.go:89] found id: "f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:42:35.047993   51251 cri.go:89] found id: ""
	I1018 17:42:35.048002   51251 logs.go:282] 1 containers: [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c]
	I1018 17:42:35.048056   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:35.052096   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:42:35.052159   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:42:35.087604   51251 cri.go:89] found id: ""
	I1018 17:42:35.087628   51251 logs.go:282] 0 containers: []
	W1018 17:42:35.087636   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:42:35.087645   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:42:35.087658   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:42:35.135319   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:42:35.135352   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:42:35.186498   51251 logs.go:123] Gathering logs for kube-controller-manager [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c] ...
	I1018 17:42:35.186531   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:42:35.217338   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:42:35.217381   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:42:35.327154   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:42:35.327184   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:42:35.341645   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:42:35.341672   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:42:35.747254   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:42:35.739248    1479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:35.739909    1479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:35.741574    1479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:35.742106    1479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:35.743686    1479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:42:35.739248    1479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:35.739909    1479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:35.741574    1479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:35.742106    1479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:35.743686    1479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:42:35.747277   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:42:35.747291   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:42:35.784796   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:42:35.784825   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:42:35.811760   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:42:35.811786   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:42:35.886991   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:42:35.887025   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:42:35.921904   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:42:35.921933   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:42:38.449291   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:38.459790   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:42:38.459857   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:42:38.486350   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:42:38.486373   51251 cri.go:89] found id: ""
	I1018 17:42:38.486383   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:42:38.486444   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:38.490359   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:42:38.490430   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:42:38.518049   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:42:38.518073   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:42:38.518078   51251 cri.go:89] found id: ""
	I1018 17:42:38.518097   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:42:38.518156   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:38.522183   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:38.526138   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:42:38.526213   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:42:38.552857   51251 cri.go:89] found id: ""
	I1018 17:42:38.552881   51251 logs.go:282] 0 containers: []
	W1018 17:42:38.552890   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:42:38.552896   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:42:38.552996   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:42:38.581427   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:42:38.581447   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:42:38.581452   51251 cri.go:89] found id: ""
	I1018 17:42:38.581460   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:42:38.581516   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:38.585308   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:38.588834   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:42:38.588907   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:42:38.626035   51251 cri.go:89] found id: ""
	I1018 17:42:38.626060   51251 logs.go:282] 0 containers: []
	W1018 17:42:38.626068   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:42:38.626074   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:42:38.626180   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:42:38.654519   51251 cri.go:89] found id: "f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:42:38.654541   51251 cri.go:89] found id: ""
	I1018 17:42:38.654549   51251 logs.go:282] 1 containers: [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c]
	I1018 17:42:38.654606   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:38.659468   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:42:38.659536   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:42:38.685688   51251 cri.go:89] found id: ""
	I1018 17:42:38.685717   51251 logs.go:282] 0 containers: []
	W1018 17:42:38.685726   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:42:38.685735   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:42:38.685747   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:42:38.783795   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:42:38.783829   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:42:38.826341   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:42:38.826373   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:42:38.860295   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:42:38.860328   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:42:38.914363   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:42:38.914398   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:42:38.945563   51251 logs.go:123] Gathering logs for kube-controller-manager [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c] ...
	I1018 17:42:38.945589   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:42:38.986953   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:42:38.986976   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:42:39.069689   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:42:39.069729   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:42:39.111763   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:42:39.111827   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:42:39.125634   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:42:39.125711   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:42:39.199836   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:42:39.189569    1644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:39.190870    1644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:39.192604    1644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:39.193407    1644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:39.194944    1644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:42:39.189569    1644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:39.190870    1644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:39.192604    1644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:39.193407    1644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:39.194944    1644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:42:39.199901   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:42:39.199927   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:42:41.727280   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:41.737746   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:42:41.737830   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:42:41.764569   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:42:41.764587   51251 cri.go:89] found id: ""
	I1018 17:42:41.764595   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:42:41.764651   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:41.768619   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:42:41.768692   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:42:41.795219   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:42:41.795239   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:42:41.795244   51251 cri.go:89] found id: ""
	I1018 17:42:41.795251   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:42:41.795315   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:41.799045   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:41.802635   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:42:41.802708   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:42:41.829223   51251 cri.go:89] found id: ""
	I1018 17:42:41.829246   51251 logs.go:282] 0 containers: []
	W1018 17:42:41.829256   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:42:41.829262   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:42:41.829319   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:42:41.863591   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:42:41.863612   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:42:41.863617   51251 cri.go:89] found id: ""
	I1018 17:42:41.863625   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:42:41.863708   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:41.867633   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:41.871288   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:42:41.871365   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:42:41.907130   51251 cri.go:89] found id: ""
	I1018 17:42:41.907154   51251 logs.go:282] 0 containers: []
	W1018 17:42:41.907162   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:42:41.907179   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:42:41.907239   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:42:41.937193   51251 cri.go:89] found id: "f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:42:41.937215   51251 cri.go:89] found id: ""
	I1018 17:42:41.937223   51251 logs.go:282] 1 containers: [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c]
	I1018 17:42:41.937281   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:41.941168   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:42:41.941244   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:42:41.993845   51251 cri.go:89] found id: ""
	I1018 17:42:41.993923   51251 logs.go:282] 0 containers: []
	W1018 17:42:41.993944   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:42:41.993955   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:42:41.993967   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:42:42.041265   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:42:42.041296   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:42:42.070875   51251 logs.go:123] Gathering logs for kube-controller-manager [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c] ...
	I1018 17:42:42.070904   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:42:42.106610   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:42:42.106642   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:42:42.194367   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:42:42.194403   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:42:42.229250   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:42:42.229279   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:42:42.283222   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:42:42.283254   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:42:42.343661   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:42:42.343694   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:42:42.376582   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:42:42.376608   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:42:42.475562   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:42:42.475597   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:42:42.488812   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:42:42.488842   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:42:42.564172   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:42:42.556222    1787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:42.556691    1787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:42.558297    1787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:42.558653    1787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:42.560347    1787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:42:42.556222    1787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:42.556691    1787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:42.558297    1787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:42.558653    1787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:42.560347    1787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
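	(Editor's note, not part of the captured log: the failure above repeats for the rest of this section because kubectl cannot reach the apiserver on localhost:8443 even though a kube-apiserver container exists. As a minimal sketch for reproducing the same checks by hand on the node, the commands below are the ones minikube itself is shown running in this log; the container ID placeholder is hypothetical and must be replaced with the ID returned by the first command, which will differ on another run.)

	  sudo crictl ps -a --quiet --name=kube-apiserver
	  sudo /usr/local/bin/crictl logs --tail 400 <kube-apiserver-container-id>
	  sudo journalctl -u kubelet -n 400
	  sudo journalctl -u crio -n 400
	  sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig

	(If the last command still reports "connection refused" on [::1]:8443, the apiserver container logs and the kubelet/CRI-O journals above are the places this log is probing for the cause. The original log resumes below.)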
	I1018 17:42:45.065078   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:45.086837   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:42:45.086979   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:42:45.165006   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:42:45.165027   51251 cri.go:89] found id: ""
	I1018 17:42:45.165035   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:42:45.165103   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:45.172323   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:42:45.172423   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:42:45.217483   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:42:45.217515   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:42:45.217521   51251 cri.go:89] found id: ""
	I1018 17:42:45.217530   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:42:45.217596   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:45.223128   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:45.227931   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:42:45.228025   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:42:45.283738   51251 cri.go:89] found id: ""
	I1018 17:42:45.283769   51251 logs.go:282] 0 containers: []
	W1018 17:42:45.283789   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:42:45.283818   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:42:45.283897   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:42:45.321652   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:42:45.321679   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:42:45.321685   51251 cri.go:89] found id: ""
	I1018 17:42:45.321694   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:42:45.321760   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:45.332292   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:45.337760   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:42:45.338055   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:42:45.381645   51251 cri.go:89] found id: ""
	I1018 17:42:45.381666   51251 logs.go:282] 0 containers: []
	W1018 17:42:45.381675   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:42:45.381681   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:42:45.381740   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:42:45.413702   51251 cri.go:89] found id: "f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:42:45.413726   51251 cri.go:89] found id: ""
	I1018 17:42:45.413735   51251 logs.go:282] 1 containers: [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c]
	I1018 17:42:45.413793   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:45.417551   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:42:45.417654   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:42:45.444154   51251 cri.go:89] found id: ""
	I1018 17:42:45.444178   51251 logs.go:282] 0 containers: []
	W1018 17:42:45.444186   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:42:45.444195   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:42:45.444206   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:42:45.537154   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:42:45.537189   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:42:45.618318   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:42:45.608985    1856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:45.610405    1856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:45.610978    1856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:45.612722    1856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:45.613098    1856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:42:45.608985    1856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:45.610405    1856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:45.610978    1856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:45.612722    1856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:45.613098    1856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:42:45.618339   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:42:45.618352   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:42:45.643567   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:42:45.643592   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:42:45.680148   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:42:45.680183   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:42:45.732576   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:42:45.732648   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:42:45.763213   51251 logs.go:123] Gathering logs for kube-controller-manager [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c] ...
	I1018 17:42:45.763299   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:42:45.790736   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:42:45.790804   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:42:45.802909   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:42:45.802991   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:42:45.850168   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:42:45.850251   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:42:45.926703   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:42:45.926741   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:42:48.486114   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:48.497086   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:42:48.497160   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:42:48.525605   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:42:48.525625   51251 cri.go:89] found id: ""
	I1018 17:42:48.525634   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:42:48.525690   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:48.529399   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:42:48.529536   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:42:48.556240   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:42:48.556261   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:42:48.556267   51251 cri.go:89] found id: ""
	I1018 17:42:48.556274   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:42:48.556331   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:48.560148   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:48.563747   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:42:48.563816   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:42:48.591484   51251 cri.go:89] found id: ""
	I1018 17:42:48.591509   51251 logs.go:282] 0 containers: []
	W1018 17:42:48.591518   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:42:48.591524   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:42:48.591584   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:42:48.621441   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:42:48.621461   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:42:48.621467   51251 cri.go:89] found id: ""
	I1018 17:42:48.621475   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:42:48.621531   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:48.625098   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:48.628679   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:42:48.628776   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:42:48.655455   51251 cri.go:89] found id: ""
	I1018 17:42:48.655477   51251 logs.go:282] 0 containers: []
	W1018 17:42:48.655486   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:42:48.655492   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:42:48.655574   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:42:48.686750   51251 cri.go:89] found id: "f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:42:48.686773   51251 cri.go:89] found id: ""
	I1018 17:42:48.686781   51251 logs.go:282] 1 containers: [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c]
	I1018 17:42:48.686841   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:48.690841   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:42:48.690946   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:42:48.718158   51251 cri.go:89] found id: ""
	I1018 17:42:48.718186   51251 logs.go:282] 0 containers: []
	W1018 17:42:48.718194   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:42:48.718203   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:42:48.718213   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:42:48.823716   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:42:48.823756   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:42:48.901683   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:42:48.892565    1991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:48.893314    1991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:48.895024    1991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:48.895911    1991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:48.897573    1991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:42:48.892565    1991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:48.893314    1991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:48.895024    1991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:48.895911    1991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:48.897573    1991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:42:48.901743   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:42:48.901756   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:42:48.946710   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:42:48.946741   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:42:48.989214   51251 logs.go:123] Gathering logs for kube-controller-manager [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c] ...
	I1018 17:42:48.989249   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:42:49.018928   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:42:49.018952   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:42:49.063728   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:42:49.063755   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:42:49.075796   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:42:49.075823   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:42:49.107128   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:42:49.107155   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:42:49.174004   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:42:49.174037   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:42:49.202814   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:42:49.202883   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:42:51.788673   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:51.804334   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:42:51.804402   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:42:51.832430   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:42:51.832451   51251 cri.go:89] found id: ""
	I1018 17:42:51.832459   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:42:51.832517   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:51.836251   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:42:51.836320   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:42:51.862897   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:42:51.862919   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:42:51.862924   51251 cri.go:89] found id: ""
	I1018 17:42:51.862931   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:42:51.862985   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:51.866673   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:51.870113   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:42:51.870200   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:42:51.895781   51251 cri.go:89] found id: ""
	I1018 17:42:51.895805   51251 logs.go:282] 0 containers: []
	W1018 17:42:51.895813   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:42:51.895820   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:42:51.895878   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:42:51.922494   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:42:51.922516   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:42:51.922521   51251 cri.go:89] found id: ""
	I1018 17:42:51.922528   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:42:51.922581   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:51.926209   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:51.929576   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:42:51.929673   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:42:51.956090   51251 cri.go:89] found id: ""
	I1018 17:42:51.956114   51251 logs.go:282] 0 containers: []
	W1018 17:42:51.956122   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:42:51.956129   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:42:51.956187   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:42:51.988490   51251 cri.go:89] found id: "f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:42:51.988512   51251 cri.go:89] found id: ""
	I1018 17:42:51.988520   51251 logs.go:282] 1 containers: [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c]
	I1018 17:42:51.988574   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:51.992080   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:42:51.992159   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:42:52.021598   51251 cri.go:89] found id: ""
	I1018 17:42:52.021624   51251 logs.go:282] 0 containers: []
	W1018 17:42:52.021632   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:42:52.021642   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:42:52.021655   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:42:52.117617   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:42:52.117653   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:42:52.176829   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:42:52.177096   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:42:52.221507   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:42:52.221581   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:42:52.290597   51251 logs.go:123] Gathering logs for kube-controller-manager [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c] ...
	I1018 17:42:52.290630   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:42:52.318933   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:42:52.318959   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:42:52.397646   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:42:52.397679   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:42:52.429557   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:42:52.429592   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:42:52.441410   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:42:52.441440   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:42:52.515237   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:42:52.505394    2179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:52.506908    2179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:52.507495    2179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:52.509107    2179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:52.509748    2179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:42:52.505394    2179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:52.506908    2179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:52.507495    2179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:52.509107    2179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:52.509748    2179 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:42:52.515259   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:42:52.515272   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:42:52.546325   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:42:52.546352   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:42:55.073960   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:55.087265   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:42:55.087396   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:42:55.118731   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:42:55.118751   51251 cri.go:89] found id: ""
	I1018 17:42:55.118760   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:42:55.118827   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:55.122773   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:42:55.122841   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:42:55.160245   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:42:55.160267   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:42:55.160284   51251 cri.go:89] found id: ""
	I1018 17:42:55.160293   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:42:55.160353   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:55.164073   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:55.167693   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:42:55.167805   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:42:55.194629   51251 cri.go:89] found id: ""
	I1018 17:42:55.194653   51251 logs.go:282] 0 containers: []
	W1018 17:42:55.194661   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:42:55.194668   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:42:55.194741   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:42:55.222517   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:42:55.222579   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:42:55.222590   51251 cri.go:89] found id: ""
	I1018 17:42:55.222599   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:42:55.222655   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:55.226357   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:55.230025   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:42:55.230092   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:42:55.263792   51251 cri.go:89] found id: ""
	I1018 17:42:55.263816   51251 logs.go:282] 0 containers: []
	W1018 17:42:55.263824   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:42:55.263830   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:42:55.263889   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:42:55.291220   51251 cri.go:89] found id: "f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:42:55.291241   51251 cri.go:89] found id: ""
	I1018 17:42:55.291249   51251 logs.go:282] 1 containers: [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c]
	I1018 17:42:55.291325   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:55.294934   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:42:55.295010   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:42:55.326586   51251 cri.go:89] found id: ""
	I1018 17:42:55.326609   51251 logs.go:282] 0 containers: []
	W1018 17:42:55.326617   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:42:55.326654   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:42:55.326671   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:42:55.401452   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:42:55.392275    2267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:55.393074    2267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:55.393930    2267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:55.395756    2267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:55.396145    2267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:42:55.392275    2267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:55.393074    2267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:55.393930    2267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:55.395756    2267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:55.396145    2267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:42:55.401476   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:42:55.401489   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:42:55.447692   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:42:55.447728   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:42:55.491129   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:42:55.491159   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:42:55.568889   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:42:55.568926   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:42:55.604397   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:42:55.604423   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:42:55.621149   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:42:55.621188   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:42:55.649355   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:42:55.649383   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:42:55.703784   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:42:55.703820   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:42:55.742564   51251 logs.go:123] Gathering logs for kube-controller-manager [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c] ...
	I1018 17:42:55.742592   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:42:55.771921   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:42:55.771952   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:42:58.379973   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:42:58.390987   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:42:58.391064   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:42:58.420177   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:42:58.420206   51251 cri.go:89] found id: ""
	I1018 17:42:58.420214   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:42:58.420280   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:58.423975   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:42:58.424051   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:42:58.450210   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:42:58.450232   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:42:58.450237   51251 cri.go:89] found id: ""
	I1018 17:42:58.450244   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:42:58.450302   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:58.454890   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:58.458701   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:42:58.458770   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:42:58.483310   51251 cri.go:89] found id: ""
	I1018 17:42:58.483334   51251 logs.go:282] 0 containers: []
	W1018 17:42:58.483342   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:42:58.483348   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:42:58.483405   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:42:58.511930   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:42:58.511958   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:42:58.511963   51251 cri.go:89] found id: ""
	I1018 17:42:58.511970   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:42:58.512025   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:58.515745   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:58.519340   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:42:58.519409   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:42:58.546212   51251 cri.go:89] found id: ""
	I1018 17:42:58.546233   51251 logs.go:282] 0 containers: []
	W1018 17:42:58.546250   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:42:58.546257   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:42:58.546336   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:42:58.573991   51251 cri.go:89] found id: "f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:42:58.574011   51251 cri.go:89] found id: ""
	I1018 17:42:58.574019   51251 logs.go:282] 1 containers: [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c]
	I1018 17:42:58.574073   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:42:58.577989   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:42:58.578068   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:42:58.609463   51251 cri.go:89] found id: ""
	I1018 17:42:58.609485   51251 logs.go:282] 0 containers: []
	W1018 17:42:58.609493   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:42:58.609520   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:42:58.609542   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:42:58.623900   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:42:58.623929   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:42:58.672129   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:42:58.672159   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:42:58.702420   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:42:58.702447   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:42:58.739914   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:42:58.739941   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:42:58.840389   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:42:58.840423   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:42:58.904498   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:42:58.896431    2440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:58.896966    2440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:58.898915    2440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:58.899719    2440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:58.901011    2440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:42:58.896431    2440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:58.896966    2440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:58.898915    2440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:58.899719    2440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:42:58.901011    2440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:42:58.904519   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:42:58.904534   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:42:58.933888   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:42:58.933915   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:42:58.967554   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:42:58.967628   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:42:59.028427   51251 logs.go:123] Gathering logs for kube-controller-manager [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c] ...
	I1018 17:42:59.028504   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:42:59.054221   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:42:59.054249   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:43:01.639025   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:43:01.651715   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:43:01.651793   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:43:01.685240   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:01.685309   51251 cri.go:89] found id: ""
	I1018 17:43:01.685339   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:43:01.685423   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:01.690385   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:43:01.690468   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:43:01.719962   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:01.720035   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:01.720055   51251 cri.go:89] found id: ""
	I1018 17:43:01.720076   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:43:01.720148   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:01.723990   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:01.727538   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:43:01.727607   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:43:01.756529   51251 cri.go:89] found id: ""
	I1018 17:43:01.756562   51251 logs.go:282] 0 containers: []
	W1018 17:43:01.756571   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:43:01.756595   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:43:01.756676   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:43:01.789556   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:01.789581   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:01.789586   51251 cri.go:89] found id: ""
	I1018 17:43:01.789594   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:43:01.789659   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:01.794374   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:01.798060   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:43:01.798129   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:43:01.833059   51251 cri.go:89] found id: ""
	I1018 17:43:01.833089   51251 logs.go:282] 0 containers: []
	W1018 17:43:01.833097   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:43:01.833103   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:43:01.833172   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:43:01.860988   51251 cri.go:89] found id: "f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:43:01.861009   51251 cri.go:89] found id: ""
	I1018 17:43:01.861017   51251 logs.go:282] 1 containers: [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c]
	I1018 17:43:01.861076   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:01.865838   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:43:01.865913   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:43:01.893009   51251 cri.go:89] found id: ""
	I1018 17:43:01.893035   51251 logs.go:282] 0 containers: []
	W1018 17:43:01.893043   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:43:01.893052   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:43:01.893064   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:43:01.997703   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:43:01.997739   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:02.060549   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:43:02.060581   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:02.094970   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:43:02.095001   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:02.161721   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:43:02.161757   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:02.209000   51251 logs.go:123] Gathering logs for kube-controller-manager [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c] ...
	I1018 17:43:02.209029   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:43:02.239896   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:43:02.239920   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:43:02.275701   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:43:02.275727   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:43:02.288373   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:43:02.288400   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:43:02.360448   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:43:02.351719    2599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:02.352549    2599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:02.354058    2599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:02.354626    2599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:02.356320    2599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:43:02.351719    2599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:02.352549    2599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:02.354058    2599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:02.354626    2599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:02.356320    2599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:43:02.360469   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:43:02.360481   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:02.390739   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:43:02.390769   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:43:04.978257   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:43:04.988916   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:43:04.989037   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:43:05.019550   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:05.019573   51251 cri.go:89] found id: ""
	I1018 17:43:05.019582   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:43:05.019646   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:05.023992   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:43:05.024069   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:43:05.050514   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:05.050533   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:05.050538   51251 cri.go:89] found id: ""
	I1018 17:43:05.050546   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:43:05.050601   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:05.054386   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:05.058083   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:43:05.058155   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:43:05.093052   51251 cri.go:89] found id: ""
	I1018 17:43:05.093079   51251 logs.go:282] 0 containers: []
	W1018 17:43:05.093088   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:43:05.093096   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:43:05.093200   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:43:05.124045   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:05.124115   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:05.124134   51251 cri.go:89] found id: ""
	I1018 17:43:05.124156   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:43:05.124238   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:05.129085   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:05.134571   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:43:05.134649   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:43:05.162401   51251 cri.go:89] found id: ""
	I1018 17:43:05.162423   51251 logs.go:282] 0 containers: []
	W1018 17:43:05.162432   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:43:05.162439   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:43:05.162505   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:43:05.191429   51251 cri.go:89] found id: "f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:43:05.191451   51251 cri.go:89] found id: ""
	I1018 17:43:05.191459   51251 logs.go:282] 1 containers: [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c]
	I1018 17:43:05.191513   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:05.195222   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:43:05.195291   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:43:05.233765   51251 cri.go:89] found id: ""
	I1018 17:43:05.233789   51251 logs.go:282] 0 containers: []
	W1018 17:43:05.233797   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:43:05.233813   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:43:05.233824   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:43:05.314015   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:43:05.314049   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:43:05.343775   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:43:05.343799   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:43:05.447678   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:43:05.447715   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:43:05.461224   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:43:05.461251   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:43:05.531644   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:43:05.521503    2698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:05.523802    2698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:05.525607    2698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:05.526297    2698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:05.527849    2698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:43:05.521503    2698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:05.523802    2698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:05.525607    2698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:05.526297    2698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:05.527849    2698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:43:05.531668   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:43:05.531681   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:05.589572   51251 logs.go:123] Gathering logs for kube-controller-manager [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c] ...
	I1018 17:43:05.589609   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:43:05.620844   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:43:05.620871   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:05.649833   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:43:05.649861   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:05.702301   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:43:05.702335   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:05.746579   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:43:05.746612   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:08.279428   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:43:08.290505   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:43:08.290572   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:43:08.323196   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:08.323217   51251 cri.go:89] found id: ""
	I1018 17:43:08.323225   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:43:08.323287   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:08.326970   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:43:08.327042   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:43:08.353811   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:08.353833   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:08.353837   51251 cri.go:89] found id: ""
	I1018 17:43:08.353845   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:43:08.353903   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:08.357796   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:08.361798   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:43:08.361874   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:43:08.390063   51251 cri.go:89] found id: ""
	I1018 17:43:08.390086   51251 logs.go:282] 0 containers: []
	W1018 17:43:08.390094   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:43:08.390104   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:43:08.390164   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:43:08.417117   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:08.417137   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:08.417142   51251 cri.go:89] found id: ""
	I1018 17:43:08.417153   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:43:08.417209   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:08.421291   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:08.424803   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:43:08.424875   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:43:08.450383   51251 cri.go:89] found id: ""
	I1018 17:43:08.450405   51251 logs.go:282] 0 containers: []
	W1018 17:43:08.450412   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:43:08.450419   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:43:08.450517   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:43:08.475291   51251 cri.go:89] found id: "f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:43:08.475312   51251 cri.go:89] found id: ""
	I1018 17:43:08.475321   51251 logs.go:282] 1 containers: [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c]
	I1018 17:43:08.475376   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:08.479043   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:43:08.479113   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:43:08.509786   51251 cri.go:89] found id: ""
	I1018 17:43:08.509809   51251 logs.go:282] 0 containers: []
	W1018 17:43:08.509817   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:43:08.509826   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:43:08.509838   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:43:08.605996   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:43:08.606031   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:43:08.622166   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:43:08.622201   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:43:08.702891   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:43:08.692116    2820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:08.693186    2820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:08.694251    2820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:08.694895    2820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:08.697165    2820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:43:08.692116    2820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:08.693186    2820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:08.694251    2820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:08.694895    2820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:08.697165    2820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:43:08.702955   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:43:08.702973   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:08.732447   51251 logs.go:123] Gathering logs for kube-controller-manager [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c] ...
	I1018 17:43:08.732474   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:43:08.759641   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:43:08.759667   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:43:08.790348   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:43:08.790378   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:08.821468   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:43:08.821493   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:08.873070   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:43:08.873109   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:08.906030   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:43:08.906070   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:08.964907   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:43:08.964966   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:43:11.547663   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:43:11.559867   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:43:11.559932   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:43:11.595124   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:11.595143   51251 cri.go:89] found id: ""
	I1018 17:43:11.595151   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:43:11.595209   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:11.599553   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:43:11.599619   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:43:11.639738   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:11.639820   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:11.639844   51251 cri.go:89] found id: ""
	I1018 17:43:11.639865   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:43:11.639950   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:11.646442   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:11.651648   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:43:11.651787   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:43:11.695203   51251 cri.go:89] found id: ""
	I1018 17:43:11.695286   51251 logs.go:282] 0 containers: []
	W1018 17:43:11.695316   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:43:11.695337   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:43:11.695418   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:43:11.744347   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:11.744416   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:11.744441   51251 cri.go:89] found id: ""
	I1018 17:43:11.744463   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:43:11.744558   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:11.751191   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:11.755958   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:43:11.756105   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:43:11.791266   51251 cri.go:89] found id: ""
	I1018 17:43:11.791331   51251 logs.go:282] 0 containers: []
	W1018 17:43:11.791353   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:43:11.791383   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:43:11.791474   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:43:11.834876   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:11.834963   51251 cri.go:89] found id: "f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:43:11.834989   51251 cri.go:89] found id: ""
	I1018 17:43:11.835011   51251 logs.go:282] 2 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c]
	I1018 17:43:11.835086   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:11.841198   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:11.846580   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:43:11.846715   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:43:11.897749   51251 cri.go:89] found id: ""
	I1018 17:43:11.897822   51251 logs.go:282] 0 containers: []
	W1018 17:43:11.897846   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:43:11.897881   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:43:11.897928   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:11.943452   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:43:11.943536   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:12.005227   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:43:12.005338   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:43:12.062557   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:43:12.062624   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:43:12.182021   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:43:12.182095   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:43:12.197845   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:43:12.197920   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:12.260741   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:43:12.260817   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:12.335387   51251 logs.go:123] Gathering logs for kube-controller-manager [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c] ...
	I1018 17:43:12.335466   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:43:12.369750   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:43:12.369775   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:43:12.449888   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:43:12.449923   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:43:12.545478   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:43:12.535379    3028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:12.536014    3028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:12.539746    3028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:12.540245    3028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:12.541774    3028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:43:12.535379    3028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:12.536014    3028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:12.539746    3028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:12.540245    3028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:12.541774    3028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:43:12.545496   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:43:12.545509   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:12.577372   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:43:12.577397   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:15.116790   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:43:15.132080   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:43:15.132161   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:43:15.159487   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:15.159506   51251 cri.go:89] found id: ""
	I1018 17:43:15.159515   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:43:15.159567   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:15.163178   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:43:15.163272   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:43:15.191277   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:15.191296   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:15.191300   51251 cri.go:89] found id: ""
	I1018 17:43:15.191315   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:43:15.191372   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:15.195019   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:15.198423   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:43:15.198491   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:43:15.225886   51251 cri.go:89] found id: ""
	I1018 17:43:15.225910   51251 logs.go:282] 0 containers: []
	W1018 17:43:15.225919   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:43:15.225925   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:43:15.225986   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:43:15.251392   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:15.251414   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:15.251419   51251 cri.go:89] found id: ""
	I1018 17:43:15.251426   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:43:15.251480   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:15.255201   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:15.258787   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:43:15.258880   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:43:15.285767   51251 cri.go:89] found id: ""
	I1018 17:43:15.285831   51251 logs.go:282] 0 containers: []
	W1018 17:43:15.285854   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:43:15.285878   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:43:15.285951   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:43:15.316160   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:15.316219   51251 cri.go:89] found id: "f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:43:15.316239   51251 cri.go:89] found id: ""
	I1018 17:43:15.316261   51251 logs.go:282] 2 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c]
	I1018 17:43:15.316333   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:15.320128   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:15.323596   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:43:15.323665   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:43:15.349496   51251 cri.go:89] found id: ""
	I1018 17:43:15.349522   51251 logs.go:282] 0 containers: []
	W1018 17:43:15.349531   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:43:15.349541   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:43:15.349569   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:15.420881   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:43:15.420916   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:43:15.451259   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:43:15.451285   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:43:15.548698   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:43:15.548740   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:43:15.561517   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:43:15.561546   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:15.608036   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:43:15.608071   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:15.641405   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:43:15.641431   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:15.668198   51251 logs.go:123] Gathering logs for kube-controller-manager [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c] ...
	I1018 17:43:15.668226   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:43:15.694563   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:43:15.694591   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:43:15.770902   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:43:15.770936   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:43:15.836895   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:43:15.828987    3175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:15.829667    3175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:15.831325    3175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:15.831865    3175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:15.833343    3175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:43:15.828987    3175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:15.829667    3175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:15.831325    3175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:15.831865    3175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:15.833343    3175 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:43:15.836919   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:43:15.836931   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:15.865888   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:43:15.865916   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:18.408468   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:43:18.419326   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:43:18.419393   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:43:18.443753   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:18.443775   51251 cri.go:89] found id: ""
	I1018 17:43:18.443783   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:43:18.443839   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:18.447404   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:43:18.447481   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:43:18.473566   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:18.473627   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:18.473639   51251 cri.go:89] found id: ""
	I1018 17:43:18.473647   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:43:18.473702   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:18.477524   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:18.481293   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:43:18.481397   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:43:18.507887   51251 cri.go:89] found id: ""
	I1018 17:43:18.507965   51251 logs.go:282] 0 containers: []
	W1018 17:43:18.507991   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:43:18.508011   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:43:18.508082   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:43:18.534789   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:18.534809   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:18.534814   51251 cri.go:89] found id: ""
	I1018 17:43:18.534821   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:43:18.534876   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:18.538531   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:18.542059   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:43:18.542133   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:43:18.567277   51251 cri.go:89] found id: ""
	I1018 17:43:18.567299   51251 logs.go:282] 0 containers: []
	W1018 17:43:18.567307   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:43:18.567316   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:43:18.567375   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:43:18.593882   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:18.593902   51251 cri.go:89] found id: "f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:43:18.593907   51251 cri.go:89] found id: ""
	I1018 17:43:18.593914   51251 logs.go:282] 2 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c]
	I1018 17:43:18.593971   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:18.598057   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:18.601482   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:43:18.601548   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:43:18.626724   51251 cri.go:89] found id: ""
	I1018 17:43:18.626748   51251 logs.go:282] 0 containers: []
	W1018 17:43:18.626756   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:43:18.626766   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:43:18.626777   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:43:18.720186   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:43:18.720220   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:43:18.732342   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:43:18.732372   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:18.777781   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:43:18.777813   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:18.814519   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:43:18.814548   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:18.842102   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:43:18.842129   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:18.870191   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:43:18.870215   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:43:18.940137   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:43:18.931877    3300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:18.932545    3300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:18.934242    3300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:18.934870    3300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:18.936368    3300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:43:18.931877    3300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:18.932545    3300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:18.934242    3300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:18.934870    3300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:18.936368    3300 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:43:18.940159   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:43:18.940171   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:18.972118   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:43:18.972143   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:19.028698   51251 logs.go:123] Gathering logs for kube-controller-manager [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c] ...
	I1018 17:43:19.028731   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:43:19.053561   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:43:19.053588   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:43:19.134177   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:43:19.134210   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:43:21.666074   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:43:21.677905   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:43:21.677982   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:43:21.710449   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:21.710470   51251 cri.go:89] found id: ""
	I1018 17:43:21.710479   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:43:21.710534   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:21.714253   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:43:21.714326   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:43:21.741478   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:21.741547   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:21.741558   51251 cri.go:89] found id: ""
	I1018 17:43:21.741566   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:43:21.741627   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:21.745535   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:21.750022   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:43:21.750140   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:43:21.780635   51251 cri.go:89] found id: ""
	I1018 17:43:21.780708   51251 logs.go:282] 0 containers: []
	W1018 17:43:21.780731   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:43:21.780778   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:43:21.780856   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:43:21.808496   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:21.808514   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:21.808518   51251 cri.go:89] found id: ""
	I1018 17:43:21.808525   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:43:21.808582   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:21.812401   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:21.815810   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:43:21.815876   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:43:21.845624   51251 cri.go:89] found id: ""
	I1018 17:43:21.845657   51251 logs.go:282] 0 containers: []
	W1018 17:43:21.845665   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:43:21.845672   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:43:21.845731   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:43:21.871314   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:21.871332   51251 cri.go:89] found id: "f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:43:21.871336   51251 cri.go:89] found id: ""
	I1018 17:43:21.871343   51251 logs.go:282] 2 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c]
	I1018 17:43:21.871399   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:21.875259   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:21.878771   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:43:21.878839   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:43:21.913289   51251 cri.go:89] found id: ""
	I1018 17:43:21.913312   51251 logs.go:282] 0 containers: []
	W1018 17:43:21.913321   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:43:21.913330   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:43:21.913341   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:21.990540   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:43:21.990577   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:22.023215   51251 logs.go:123] Gathering logs for kube-controller-manager [f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c] ...
	I1018 17:43:22.023243   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f14ea626a08f1be9764e93e3d254a289788a96d4eb1434de0caf13636575872c"
	I1018 17:43:22.053561   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:43:22.053588   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:22.081164   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:43:22.081191   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:22.145177   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:43:22.145212   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:22.184829   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:43:22.184859   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:22.228057   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:43:22.228081   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:43:22.316019   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:43:22.316053   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:43:22.347876   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:43:22.347901   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:43:22.450507   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:43:22.450541   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:43:22.462429   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:43:22.462456   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:43:22.536495   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:43:22.527657    3486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:22.528744    3486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:22.530446    3486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:22.531068    3486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:22.532737    3486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:43:22.527657    3486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:22.528744    3486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:22.530446    3486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:22.531068    3486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:22.532737    3486 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
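	The cycle above is minikube's log-gathering loop failing because every kubectl call against https://localhost:8443 inside the node is refused, so the "describe nodes" step keeps erroring out while the container and journal log steps still succeed. A minimal sketch of re-running the same probes by hand, using only the paths, port, and commands already shown in these log lines (the <profile> placeholder is an assumption; the profile name is not visible in this part of the log):
	
		# Same "describe nodes" call the gathering loop runs, with the in-node kubeconfig.
		minikube -p <profile> ssh -- sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes \
		  --kubeconfig=/var/lib/minikube/kubeconfig
	
		# Check whether anything is listening on the apiserver port; a refused connection
		# here matches the memcache.go "connection refused" errors above.
		minikube -p <profile> ssh -- sudo ss -ltnp | grep 8443 || echo "nothing listening on 8443"
	
		# List apiserver containers the same way the loop does before pulling their logs.
		minikube -p <profile> ssh -- sudo crictl ps -a --quiet --name=kube-apiserver
	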
	I1018 17:43:25.036723   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:43:25.048068   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:43:25.048137   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:43:25.074496   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:25.074517   51251 cri.go:89] found id: ""
	I1018 17:43:25.074525   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:43:25.074581   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:25.078699   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:43:25.078775   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:43:25.106068   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:25.106088   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:25.106092   51251 cri.go:89] found id: ""
	I1018 17:43:25.106099   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:43:25.106154   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:25.109911   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:25.116299   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:43:25.116392   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:43:25.152465   51251 cri.go:89] found id: ""
	I1018 17:43:25.152545   51251 logs.go:282] 0 containers: []
	W1018 17:43:25.152568   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:43:25.152587   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:43:25.152679   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:43:25.179667   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:25.179690   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:25.179695   51251 cri.go:89] found id: ""
	I1018 17:43:25.179703   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:43:25.179762   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:25.183571   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:25.187316   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:43:25.187431   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:43:25.216762   51251 cri.go:89] found id: ""
	I1018 17:43:25.216796   51251 logs.go:282] 0 containers: []
	W1018 17:43:25.216805   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:43:25.216812   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:43:25.216871   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:43:25.244556   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:25.244578   51251 cri.go:89] found id: ""
	I1018 17:43:25.244587   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:43:25.244642   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:25.248407   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:43:25.248485   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:43:25.274854   51251 cri.go:89] found id: ""
	I1018 17:43:25.274879   51251 logs.go:282] 0 containers: []
	W1018 17:43:25.274888   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:43:25.274897   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:43:25.274908   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:25.331118   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:43:25.331153   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:43:25.411446   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:43:25.411478   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:43:25.462440   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:43:25.462467   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:25.525297   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:43:25.525373   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:25.555066   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:43:25.555092   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:25.581528   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:43:25.581558   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:43:25.682424   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:43:25.682461   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:43:25.695456   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:43:25.695486   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:43:25.766142   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:43:25.757215    3610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:25.757999    3610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:25.759442    3610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:25.759856    3610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:25.761265    3610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:43:25.757215    3610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:25.757999    3610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:25.759442    3610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:25.759856    3610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:25.761265    3610 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:43:25.766162   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:43:25.766174   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:25.795404   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:43:25.795433   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:28.337726   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:43:28.348255   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:43:28.348338   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:43:28.382821   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:28.382841   51251 cri.go:89] found id: ""
	I1018 17:43:28.382849   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:43:28.382903   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:28.386571   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:43:28.386653   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:43:28.418956   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:28.418976   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:28.418981   51251 cri.go:89] found id: ""
	I1018 17:43:28.418988   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:43:28.419041   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:28.422637   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:28.426047   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:43:28.426115   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:43:28.450805   51251 cri.go:89] found id: ""
	I1018 17:43:28.450826   51251 logs.go:282] 0 containers: []
	W1018 17:43:28.450834   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:43:28.450841   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:43:28.450897   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:43:28.476049   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:28.476069   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:28.476075   51251 cri.go:89] found id: ""
	I1018 17:43:28.476083   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:43:28.476137   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:28.479674   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:28.483214   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:43:28.483280   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:43:28.509438   51251 cri.go:89] found id: ""
	I1018 17:43:28.509460   51251 logs.go:282] 0 containers: []
	W1018 17:43:28.509468   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:43:28.509475   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:43:28.509531   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:43:28.536762   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:28.536783   51251 cri.go:89] found id: ""
	I1018 17:43:28.536791   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:43:28.536846   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:28.540786   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:43:28.540849   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:43:28.566044   51251 cri.go:89] found id: ""
	I1018 17:43:28.566066   51251 logs.go:282] 0 containers: []
	W1018 17:43:28.566076   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:43:28.566085   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:43:28.566126   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:43:28.668507   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:43:28.668548   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:28.696140   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:43:28.696166   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:28.742992   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:43:28.743028   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:28.773720   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:43:28.773749   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:28.800871   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:43:28.800897   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:43:28.812516   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:43:28.812544   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:43:28.881394   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:43:28.872850    3737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:28.873551    3737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:28.875119    3737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:28.875694    3737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:28.877437    3737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:43:28.872850    3737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:28.873551    3737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:28.875119    3737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:28.875694    3737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:28.877437    3737 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:43:28.881466   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:43:28.881493   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:28.920319   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:43:28.920351   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:29.001463   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:43:29.001501   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:43:29.080673   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:43:29.080705   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:43:31.615872   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:43:31.627104   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:43:31.627173   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:43:31.652790   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:31.652812   51251 cri.go:89] found id: ""
	I1018 17:43:31.652820   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:43:31.652880   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:31.656835   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:43:31.656905   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:43:31.684663   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:31.684685   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:31.684690   51251 cri.go:89] found id: ""
	I1018 17:43:31.684698   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:43:31.684752   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:31.688556   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:31.692271   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:43:31.692343   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:43:31.720037   51251 cri.go:89] found id: ""
	I1018 17:43:31.720059   51251 logs.go:282] 0 containers: []
	W1018 17:43:31.720067   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:43:31.720074   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:43:31.720130   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:43:31.745058   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:31.745078   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:31.745083   51251 cri.go:89] found id: ""
	I1018 17:43:31.745090   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:43:31.745144   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:31.748688   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:31.752002   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:43:31.752068   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:43:31.780253   51251 cri.go:89] found id: ""
	I1018 17:43:31.780275   51251 logs.go:282] 0 containers: []
	W1018 17:43:31.780283   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:43:31.780289   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:43:31.780346   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:43:31.806333   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:31.806358   51251 cri.go:89] found id: ""
	I1018 17:43:31.806365   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:43:31.806429   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:31.810331   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:43:31.810403   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:43:31.836140   51251 cri.go:89] found id: ""
	I1018 17:43:31.836205   51251 logs.go:282] 0 containers: []
	W1018 17:43:31.836227   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:43:31.836250   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:43:31.836292   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:31.874437   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:43:31.874512   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:31.901146   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:43:31.901171   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:43:31.998418   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:43:31.998452   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:43:32.014569   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:43:32.014606   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:32.063231   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:43:32.063266   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:32.130021   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:43:32.130061   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:32.160724   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:43:32.160761   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:43:32.239135   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:43:32.239173   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:43:32.285504   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:43:32.285531   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:43:32.361004   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:43:32.352916    3895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:32.353683    3895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:32.355270    3895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:32.355600    3895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:32.357143    3895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:43:32.352916    3895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:32.353683    3895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:32.355270    3895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:32.355600    3895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:32.357143    3895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:43:32.361029   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:43:32.361042   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:34.888854   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:43:34.901112   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:43:34.901187   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:43:34.929962   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:34.929982   51251 cri.go:89] found id: ""
	I1018 17:43:34.929990   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:43:34.930044   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:34.933771   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:43:34.933840   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:43:34.974958   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:34.974990   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:34.974994   51251 cri.go:89] found id: ""
	I1018 17:43:34.975002   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:43:34.975063   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:34.979007   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:34.982588   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:43:34.982669   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:43:35.025772   51251 cri.go:89] found id: ""
	I1018 17:43:35.025794   51251 logs.go:282] 0 containers: []
	W1018 17:43:35.025802   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:43:35.025808   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:43:35.025867   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:43:35.054583   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:35.054606   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:35.054611   51251 cri.go:89] found id: ""
	I1018 17:43:35.054619   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:43:35.054683   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:35.058624   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:35.062166   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:43:35.062249   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:43:35.099459   51251 cri.go:89] found id: ""
	I1018 17:43:35.099482   51251 logs.go:282] 0 containers: []
	W1018 17:43:35.099490   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:43:35.099497   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:43:35.099553   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:43:35.135905   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:35.135927   51251 cri.go:89] found id: ""
	I1018 17:43:35.135936   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:43:35.135993   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:35.139558   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:43:35.139675   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:43:35.167854   51251 cri.go:89] found id: ""
	I1018 17:43:35.167877   51251 logs.go:282] 0 containers: []
	W1018 17:43:35.167886   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:43:35.167895   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:43:35.167906   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:43:35.268911   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:43:35.268953   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:43:35.351239   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:43:35.342070    3975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:35.342707    3975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:35.344447    3975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:35.345185    3975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:35.346039    3975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:43:35.342070    3975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:35.342707    3975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:35.344447    3975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:35.345185    3975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:35.346039    3975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:43:35.351259   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:43:35.351271   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:35.414894   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:43:35.414928   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:35.449804   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:43:35.449834   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:35.506409   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:43:35.506445   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:43:35.595870   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:43:35.595911   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:43:35.608335   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:43:35.608364   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:35.639546   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:43:35.639574   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:35.667961   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:43:35.667987   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:35.698739   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:43:35.698763   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:43:38.237278   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:43:38.248092   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:43:38.248161   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:43:38.274867   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:38.274888   51251 cri.go:89] found id: ""
	I1018 17:43:38.274896   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:43:38.274965   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:38.278707   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:43:38.278774   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:43:38.304232   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:38.304252   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:38.304256   51251 cri.go:89] found id: ""
	I1018 17:43:38.304264   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:43:38.304317   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:38.309670   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:38.313425   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:43:38.313497   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:43:38.344118   51251 cri.go:89] found id: ""
	I1018 17:43:38.344140   51251 logs.go:282] 0 containers: []
	W1018 17:43:38.344149   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:43:38.344156   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:43:38.344214   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:43:38.376271   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:38.376294   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:38.376298   51251 cri.go:89] found id: ""
	I1018 17:43:38.376316   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:43:38.376373   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:38.380454   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:38.384255   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:43:38.384326   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:43:38.409931   51251 cri.go:89] found id: ""
	I1018 17:43:38.409955   51251 logs.go:282] 0 containers: []
	W1018 17:43:38.409963   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:43:38.409977   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:43:38.410038   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:43:38.436568   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:38.436591   51251 cri.go:89] found id: ""
	I1018 17:43:38.436600   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:43:38.436672   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:38.440383   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:43:38.440477   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:43:38.468084   51251 cri.go:89] found id: ""
	I1018 17:43:38.468161   51251 logs.go:282] 0 containers: []
	W1018 17:43:38.468184   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:43:38.468206   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:43:38.468228   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:43:38.565168   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:43:38.565204   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:43:38.577269   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:43:38.577297   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:43:38.646729   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:43:38.638445    4115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:38.639186    4115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:38.640793    4115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:38.641395    4115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:38.643175    4115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:43:38.638445    4115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:38.639186    4115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:38.640793    4115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:38.641395    4115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:38.643175    4115 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:43:38.646754   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:43:38.646768   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:38.673481   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:43:38.673507   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:38.719835   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:43:38.719871   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:38.752322   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:43:38.752362   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:38.783579   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:43:38.783606   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:43:38.820293   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:43:38.820322   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:38.878730   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:43:38.878761   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:38.907670   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:43:38.907740   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:43:41.489854   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:43:41.500771   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:43:41.500872   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:43:41.526674   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:41.526696   51251 cri.go:89] found id: ""
	I1018 17:43:41.526706   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:43:41.526770   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:41.531078   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:43:41.531191   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:43:41.562796   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:41.562823   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:41.562829   51251 cri.go:89] found id: ""
	I1018 17:43:41.562837   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:43:41.562959   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:41.566913   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:41.570998   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:43:41.571118   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:43:41.597622   51251 cri.go:89] found id: ""
	I1018 17:43:41.597647   51251 logs.go:282] 0 containers: []
	W1018 17:43:41.597655   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:43:41.597662   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:43:41.597720   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:43:41.627549   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:41.627570   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:41.627575   51251 cri.go:89] found id: ""
	I1018 17:43:41.627583   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:43:41.627642   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:41.631299   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:41.635563   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:43:41.635662   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:43:41.662146   51251 cri.go:89] found id: ""
	I1018 17:43:41.662170   51251 logs.go:282] 0 containers: []
	W1018 17:43:41.662179   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:43:41.662185   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:43:41.662244   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:43:41.693012   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:41.693038   51251 cri.go:89] found id: ""
	I1018 17:43:41.693047   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:43:41.693132   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:41.697195   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:43:41.697265   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:43:41.729826   51251 cri.go:89] found id: ""
	I1018 17:43:41.729850   51251 logs.go:282] 0 containers: []
	W1018 17:43:41.729859   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:43:41.729869   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:43:41.729880   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:43:41.828078   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:43:41.828110   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:43:41.901435   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:43:41.892987    4248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:41.893726    4248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:41.895255    4248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:41.895832    4248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:41.897510    4248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:43:41.892987    4248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:41.893726    4248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:41.895255    4248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:41.895832    4248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:41.897510    4248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:43:41.901459   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:43:41.901472   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:41.929914   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:43:41.929989   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:41.987757   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:43:41.987802   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:42.039791   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:43:42.039830   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:42.075456   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:43:42.075487   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:43:42.149099   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:43:42.149132   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:43:42.164617   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:43:42.164650   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:42.257289   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:43:42.257327   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:42.287081   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:43:42.287112   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:43:44.874333   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:43:44.884870   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:43:44.884968   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:43:44.912153   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:44.912175   51251 cri.go:89] found id: ""
	I1018 17:43:44.912183   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:43:44.912237   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:44.915849   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:43:44.915919   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:43:44.942584   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:44.942604   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:44.942609   51251 cri.go:89] found id: ""
	I1018 17:43:44.942616   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:43:44.942668   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:44.946463   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:44.949841   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:43:44.949907   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:43:44.986621   51251 cri.go:89] found id: ""
	I1018 17:43:44.986646   51251 logs.go:282] 0 containers: []
	W1018 17:43:44.986654   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:43:44.986661   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:43:44.986718   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:43:45.029811   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:45.029830   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:45.029835   51251 cri.go:89] found id: ""
	I1018 17:43:45.029843   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:43:45.029908   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:45.035692   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:45.040000   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:43:45.040078   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:43:45.098723   51251 cri.go:89] found id: ""
	I1018 17:43:45.098751   51251 logs.go:282] 0 containers: []
	W1018 17:43:45.098760   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:43:45.098770   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:43:45.098843   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:43:45.162198   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:45.162228   51251 cri.go:89] found id: ""
	I1018 17:43:45.162238   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:43:45.162307   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:45.167619   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:43:45.167700   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:43:45.211984   51251 cri.go:89] found id: ""
	I1018 17:43:45.212008   51251 logs.go:282] 0 containers: []
	W1018 17:43:45.212018   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:43:45.212028   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:43:45.212041   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:43:45.226821   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:43:45.226851   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:43:45.337585   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:43:45.321955    4382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:45.322823    4382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:45.324086    4382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:45.327115    4382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:45.329027    4382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:43:45.321955    4382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:45.322823    4382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:45.324086    4382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:45.327115    4382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:45.329027    4382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:43:45.337625   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:43:45.337641   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:45.377460   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:43:45.377491   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:45.429187   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:43:45.429222   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:45.457994   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:43:45.458022   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:43:45.540761   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:43:45.540797   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:43:45.573633   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:43:45.573662   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:43:45.672580   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:43:45.672617   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:45.706688   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:43:45.706720   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:45.783083   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:43:45.783120   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:48.314260   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:43:48.324891   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:43:48.324985   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:43:48.357904   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:48.357927   51251 cri.go:89] found id: ""
	I1018 17:43:48.357940   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:43:48.357997   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:48.362392   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:43:48.362474   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:43:48.397905   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:48.397927   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:48.397932   51251 cri.go:89] found id: ""
	I1018 17:43:48.397940   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:43:48.397993   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:48.401719   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:48.404922   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:43:48.405019   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:43:48.431573   51251 cri.go:89] found id: ""
	I1018 17:43:48.431598   51251 logs.go:282] 0 containers: []
	W1018 17:43:48.431606   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:43:48.431613   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:43:48.431673   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:43:48.458728   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:48.458755   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:48.458760   51251 cri.go:89] found id: ""
	I1018 17:43:48.458767   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:43:48.458824   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:48.462488   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:48.465841   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:43:48.465909   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:43:48.491719   51251 cri.go:89] found id: ""
	I1018 17:43:48.491741   51251 logs.go:282] 0 containers: []
	W1018 17:43:48.491749   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:43:48.491755   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:43:48.491815   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:43:48.522124   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:48.522189   51251 cri.go:89] found id: ""
	I1018 17:43:48.522211   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:43:48.522292   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:48.526320   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:43:48.526407   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:43:48.552413   51251 cri.go:89] found id: ""
	I1018 17:43:48.552436   51251 logs.go:282] 0 containers: []
	W1018 17:43:48.552445   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:43:48.552454   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:43:48.552471   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:43:48.647083   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:43:48.647114   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:43:48.660735   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:43:48.660768   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:48.690812   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:43:48.690837   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:48.721178   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:43:48.721208   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:48.748549   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:43:48.748617   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:43:48.823598   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:43:48.823637   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:43:48.855654   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:43:48.855680   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:43:48.931642   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:43:48.922606    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:48.923296    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:48.925195    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:48.925885    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:48.928154    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:43:48.922606    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:48.923296    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:48.925195    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:48.925885    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:48.928154    4557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:43:48.931664   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:43:48.931678   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:48.984964   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:43:48.985003   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:49.022359   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:43:49.022391   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:51.581690   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:43:51.592535   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:43:51.592618   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:43:51.621442   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:51.621470   51251 cri.go:89] found id: ""
	I1018 17:43:51.621479   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:43:51.621535   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:51.625435   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:43:51.625513   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:43:51.653328   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:51.653354   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:51.653360   51251 cri.go:89] found id: ""
	I1018 17:43:51.653367   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:43:51.653425   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:51.657372   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:51.660911   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:43:51.661083   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:43:51.687435   51251 cri.go:89] found id: ""
	I1018 17:43:51.687456   51251 logs.go:282] 0 containers: []
	W1018 17:43:51.687465   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:43:51.687472   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:43:51.687533   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:43:51.716167   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:51.716189   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:51.716194   51251 cri.go:89] found id: ""
	I1018 17:43:51.716201   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:43:51.716256   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:51.719950   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:51.723494   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:43:51.723575   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:43:51.752147   51251 cri.go:89] found id: ""
	I1018 17:43:51.752171   51251 logs.go:282] 0 containers: []
	W1018 17:43:51.752180   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:43:51.752186   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:43:51.752245   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:43:51.779213   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:51.779236   51251 cri.go:89] found id: ""
	I1018 17:43:51.779244   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:43:51.779305   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:51.782913   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:43:51.782986   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:43:51.810202   51251 cri.go:89] found id: ""
	I1018 17:43:51.810228   51251 logs.go:282] 0 containers: []
	W1018 17:43:51.810236   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:43:51.810246   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:43:51.810258   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:43:51.824029   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:43:51.824058   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:43:51.894919   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:43:51.886698    4657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:51.887712    4657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:51.889389    4657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:51.889843    4657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:51.891356    4657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:43:51.886698    4657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:51.887712    4657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:51.889389    4657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:51.889843    4657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:51.891356    4657 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:43:51.894983   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:43:51.895002   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:51.955232   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:43:51.955263   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:51.990622   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:43:51.990651   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:52.020376   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:43:52.020405   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:43:52.066713   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:43:52.066740   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:43:52.172061   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:43:52.172103   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:52.214913   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:43:52.214938   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:52.251763   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:43:52.251854   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:52.311510   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:43:52.311541   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:43:54.894390   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:43:54.907290   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:43:54.907366   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:43:54.940172   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:54.940196   51251 cri.go:89] found id: ""
	I1018 17:43:54.940204   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:43:54.940260   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:54.943992   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:43:54.944086   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:43:54.978188   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:54.978210   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:54.978214   51251 cri.go:89] found id: ""
	I1018 17:43:54.978222   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:43:54.978282   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:54.982194   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:54.986022   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:43:54.986121   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:43:55.029209   51251 cri.go:89] found id: ""
	I1018 17:43:55.029239   51251 logs.go:282] 0 containers: []
	W1018 17:43:55.029248   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:43:55.029256   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:43:55.029318   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:43:55.057246   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:55.057271   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:55.057276   51251 cri.go:89] found id: ""
	I1018 17:43:55.057283   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:43:55.057336   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:55.061051   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:55.064367   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:43:55.064436   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:43:55.095243   51251 cri.go:89] found id: ""
	I1018 17:43:55.095307   51251 logs.go:282] 0 containers: []
	W1018 17:43:55.095329   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:43:55.095341   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:43:55.095399   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:43:55.122785   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:55.122804   51251 cri.go:89] found id: ""
	I1018 17:43:55.122813   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:43:55.122876   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:55.132639   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:43:55.132738   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:43:55.162942   51251 cri.go:89] found id: ""
	I1018 17:43:55.162977   51251 logs.go:282] 0 containers: []
	W1018 17:43:55.162986   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:43:55.163011   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:43:55.163032   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:55.228280   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:43:55.228312   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:55.259473   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:43:55.259500   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:43:55.292185   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:43:55.292220   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:55.341717   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:43:55.341749   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:55.375698   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:43:55.375727   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:55.402916   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:43:55.402942   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:43:55.490846   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:43:55.490886   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:43:55.587437   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:43:55.587478   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:43:55.600254   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:43:55.600280   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:43:55.666266   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:43:55.657772    4845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:55.658733    4845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:55.660294    4845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:55.660924    4845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:55.662498    4845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:43:55.657772    4845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:55.658733    4845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:55.660294    4845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:55.660924    4845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:55.662498    4845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:43:55.666289   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:43:55.666311   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:58.191608   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:43:58.207197   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:43:58.207266   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:43:58.241572   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:58.241593   51251 cri.go:89] found id: ""
	I1018 17:43:58.241602   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:43:58.241656   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:58.245301   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:43:58.245380   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:43:58.275809   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:58.275830   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:58.275835   51251 cri.go:89] found id: ""
	I1018 17:43:58.275842   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:43:58.275898   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:58.279806   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:58.283389   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:43:58.283459   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:43:58.312440   51251 cri.go:89] found id: ""
	I1018 17:43:58.312464   51251 logs.go:282] 0 containers: []
	W1018 17:43:58.312472   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:43:58.312479   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:43:58.312535   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:43:58.341315   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:58.341341   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:58.341346   51251 cri.go:89] found id: ""
	I1018 17:43:58.341354   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:43:58.341418   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:58.345155   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:58.348837   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:43:58.348906   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:43:58.375741   51251 cri.go:89] found id: ""
	I1018 17:43:58.375811   51251 logs.go:282] 0 containers: []
	W1018 17:43:58.375843   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:43:58.375861   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:43:58.375951   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:43:58.402340   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:58.402361   51251 cri.go:89] found id: ""
	I1018 17:43:58.402369   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:43:58.402424   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:43:58.406046   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:43:58.406112   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:43:58.430628   51251 cri.go:89] found id: ""
	I1018 17:43:58.430701   51251 logs.go:282] 0 containers: []
	W1018 17:43:58.430717   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:43:58.430727   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:43:58.430737   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:43:58.524428   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:43:58.524462   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:43:58.581885   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:43:58.581916   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:43:58.611949   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:43:58.611979   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:43:58.693414   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:43:58.693450   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:43:58.705470   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:43:58.705496   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:43:58.771817   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:43:58.763821    4948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:58.764175    4948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:58.765665    4948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:58.766083    4948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:58.767558    4948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:43:58.763821    4948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:58.764175    4948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:58.765665    4948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:58.766083    4948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:43:58.767558    4948 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:43:58.771836   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:43:58.771847   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:43:58.798225   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:43:58.798252   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:43:58.848969   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:43:58.849000   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:43:58.887826   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:43:58.887856   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:43:58.914297   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:43:58.914322   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:44:01.448548   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:44:01.459433   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:44:01.459507   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:44:01.490534   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:01.490566   51251 cri.go:89] found id: ""
	I1018 17:44:01.490575   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:44:01.490649   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:01.494451   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:44:01.494547   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:44:01.522081   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:01.522104   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:01.522109   51251 cri.go:89] found id: ""
	I1018 17:44:01.522117   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:44:01.522175   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:01.526069   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:01.529977   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:44:01.530054   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:44:01.557411   51251 cri.go:89] found id: ""
	I1018 17:44:01.557433   51251 logs.go:282] 0 containers: []
	W1018 17:44:01.557442   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:44:01.557448   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:44:01.557508   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:44:01.585118   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:01.585142   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:01.585147   51251 cri.go:89] found id: ""
	I1018 17:44:01.585155   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:44:01.585218   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:01.588900   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:01.592735   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:44:01.592820   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:44:01.621026   51251 cri.go:89] found id: ""
	I1018 17:44:01.621098   51251 logs.go:282] 0 containers: []
	W1018 17:44:01.621121   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:44:01.621140   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:44:01.621227   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:44:01.649479   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:01.649503   51251 cri.go:89] found id: ""
	I1018 17:44:01.649512   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:44:01.649576   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:01.653509   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:44:01.653601   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:44:01.680380   51251 cri.go:89] found id: ""
	I1018 17:44:01.680405   51251 logs.go:282] 0 containers: []
	W1018 17:44:01.680413   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:44:01.680445   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:44:01.680470   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:01.719413   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:44:01.719445   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:01.778065   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:44:01.778113   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:44:01.863062   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:44:01.863098   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:44:01.933290   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:44:01.925181    5079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:01.926041    5079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:01.926645    5079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:01.928011    5079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:01.928516    5079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:44:01.925181    5079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:01.926041    5079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:01.926645    5079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:01.928011    5079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:01.928516    5079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:44:01.933312   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:44:01.933325   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:01.994141   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:44:01.994175   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:02.027406   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:44:02.027433   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:02.058305   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:44:02.058374   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:44:02.089161   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:44:02.089238   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:44:02.197504   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:44:02.197547   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:44:02.220679   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:44:02.220704   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:04.749655   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:44:04.761329   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:44:04.761399   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:44:04.791310   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:04.791330   51251 cri.go:89] found id: ""
	I1018 17:44:04.791338   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:44:04.791391   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:04.795236   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:44:04.795315   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:44:04.826977   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:04.826999   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:04.827004   51251 cri.go:89] found id: ""
	I1018 17:44:04.827012   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:44:04.827071   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:04.831056   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:04.834547   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:44:04.834619   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:44:04.861994   51251 cri.go:89] found id: ""
	I1018 17:44:04.862019   51251 logs.go:282] 0 containers: []
	W1018 17:44:04.862028   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:44:04.862036   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:44:04.862093   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:44:04.891547   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:04.891568   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:04.891573   51251 cri.go:89] found id: ""
	I1018 17:44:04.891580   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:44:04.891664   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:04.895286   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:04.898803   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:44:04.898879   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:44:04.925892   51251 cri.go:89] found id: ""
	I1018 17:44:04.925917   51251 logs.go:282] 0 containers: []
	W1018 17:44:04.925925   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:44:04.925932   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:44:04.925992   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:44:04.950898   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:04.950920   51251 cri.go:89] found id: ""
	I1018 17:44:04.950937   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:44:04.950992   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:04.954458   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:44:04.954524   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:44:04.985795   51251 cri.go:89] found id: ""
	I1018 17:44:04.985818   51251 logs.go:282] 0 containers: []
	W1018 17:44:04.985826   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:44:04.985845   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:44:04.985857   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:05.039846   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:44:05.039880   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:05.074700   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:44:05.074733   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:05.123696   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:44:05.123722   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:05.162141   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:44:05.162168   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:05.233397   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:44:05.233431   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:05.260751   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:44:05.260780   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:44:05.342549   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:44:05.342585   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:44:05.374809   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:44:05.374833   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:44:05.480225   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:44:05.480260   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:44:05.492409   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:44:05.492433   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:44:05.563815   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:44:05.554079    5263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:05.554775    5263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:05.556564    5263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:05.557183    5263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:05.558926    5263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:44:05.554079    5263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:05.554775    5263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:05.556564    5263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:05.557183    5263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:05.558926    5263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:44:08.065115   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:44:08.076338   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:44:08.076434   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:44:08.104997   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:08.105072   51251 cri.go:89] found id: ""
	I1018 17:44:08.105096   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:44:08.105171   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:08.109342   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:44:08.109473   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:44:08.142036   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:08.142059   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:08.142063   51251 cri.go:89] found id: ""
	I1018 17:44:08.142071   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:44:08.142127   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:08.145811   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:08.149071   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:44:08.149138   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:44:08.178455   51251 cri.go:89] found id: ""
	I1018 17:44:08.178476   51251 logs.go:282] 0 containers: []
	W1018 17:44:08.178485   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:44:08.178491   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:44:08.178547   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:44:08.211837   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:08.211858   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:08.211862   51251 cri.go:89] found id: ""
	I1018 17:44:08.211871   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:44:08.211926   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:08.215306   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:08.218688   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:44:08.218753   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:44:08.245955   51251 cri.go:89] found id: ""
	I1018 17:44:08.245978   51251 logs.go:282] 0 containers: []
	W1018 17:44:08.245987   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:44:08.245994   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:44:08.246072   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:44:08.277970   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:08.277992   51251 cri.go:89] found id: ""
	I1018 17:44:08.278011   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:44:08.278083   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:08.281866   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:44:08.281956   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:44:08.314813   51251 cri.go:89] found id: ""
	I1018 17:44:08.314835   51251 logs.go:282] 0 containers: []
	W1018 17:44:08.314844   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:44:08.314853   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:44:08.314888   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:44:08.326805   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:44:08.326836   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:08.360439   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:44:08.360467   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:08.388919   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:44:08.388973   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:44:08.486321   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:44:08.486351   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:44:08.552337   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:44:08.544684    5352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:08.545314    5352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:08.546893    5352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:08.547374    5352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:08.548846    5352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:44:08.544684    5352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:08.545314    5352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:08.546893    5352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:08.547374    5352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:08.548846    5352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:44:08.552356   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:44:08.552369   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:08.577416   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:44:08.577441   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:08.629938   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:44:08.629973   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:08.689554   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:44:08.689585   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:08.719107   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:44:08.719132   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:44:08.799512   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:44:08.799588   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:44:11.341509   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:44:11.352018   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:44:11.352091   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:44:11.378915   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:11.378937   51251 cri.go:89] found id: ""
	I1018 17:44:11.378946   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:44:11.379001   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:11.382407   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:44:11.382471   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:44:11.407787   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:11.407806   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:11.407811   51251 cri.go:89] found id: ""
	I1018 17:44:11.407818   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:44:11.407902   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:11.411921   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:11.415171   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:44:11.415239   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:44:11.440964   51251 cri.go:89] found id: ""
	I1018 17:44:11.440986   51251 logs.go:282] 0 containers: []
	W1018 17:44:11.440995   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:44:11.441001   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:44:11.441056   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:44:11.470489   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:11.470512   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:11.470516   51251 cri.go:89] found id: ""
	I1018 17:44:11.470523   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:44:11.470579   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:11.474310   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:11.477884   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:44:11.477960   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:44:11.504799   51251 cri.go:89] found id: ""
	I1018 17:44:11.504862   51251 logs.go:282] 0 containers: []
	W1018 17:44:11.504885   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:44:11.504906   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:44:11.505006   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:44:11.533920   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:11.533983   51251 cri.go:89] found id: ""
	I1018 17:44:11.534003   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:44:11.534091   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:11.537702   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:44:11.537789   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:44:11.564923   51251 cri.go:89] found id: ""
	I1018 17:44:11.565058   51251 logs.go:282] 0 containers: []
	W1018 17:44:11.565068   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:44:11.565077   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:44:11.565089   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:44:11.576916   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:44:11.577027   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:44:11.644089   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:44:11.636599    5471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:11.637224    5471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:11.638751    5471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:11.639193    5471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:11.640642    5471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:44:11.636599    5471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:11.637224    5471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:11.638751    5471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:11.639193    5471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:11.640642    5471 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:44:11.644109   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:44:11.644123   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:11.698636   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:44:11.698669   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:11.760923   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:44:11.760958   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:11.787821   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:44:11.787851   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:11.820451   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:44:11.820482   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:44:11.851416   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:44:11.851442   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:44:11.946634   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:44:11.946674   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:11.975802   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:44:11.975830   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:12.010031   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:44:12.010112   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:44:14.600286   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:44:14.611078   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:44:14.611145   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:44:14.638095   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:14.638116   51251 cri.go:89] found id: ""
	I1018 17:44:14.638124   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:44:14.638205   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:14.641787   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:44:14.641856   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:44:14.668881   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:14.668904   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:14.668910   51251 cri.go:89] found id: ""
	I1018 17:44:14.668918   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:44:14.669001   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:14.672474   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:14.675764   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:44:14.675840   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:44:14.699628   51251 cri.go:89] found id: ""
	I1018 17:44:14.699652   51251 logs.go:282] 0 containers: []
	W1018 17:44:14.699660   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:44:14.699666   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:44:14.699723   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:44:14.724155   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:14.724177   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:14.724182   51251 cri.go:89] found id: ""
	I1018 17:44:14.724190   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:44:14.724260   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:14.728073   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:14.731467   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:44:14.731534   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:44:14.757304   51251 cri.go:89] found id: ""
	I1018 17:44:14.757327   51251 logs.go:282] 0 containers: []
	W1018 17:44:14.757354   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:44:14.757361   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:44:14.757420   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:44:14.784778   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:14.784799   51251 cri.go:89] found id: ""
	I1018 17:44:14.784808   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:44:14.784862   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:14.788408   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:44:14.788477   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:44:14.819756   51251 cri.go:89] found id: ""
	I1018 17:44:14.819778   51251 logs.go:282] 0 containers: []
	W1018 17:44:14.819796   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:44:14.819805   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:44:14.819816   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:14.844668   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:44:14.844698   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:44:14.876534   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:44:14.876564   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:44:14.980256   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:44:14.980340   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:15.044346   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:44:15.044386   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:15.121677   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:44:15.121713   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:44:15.203393   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:44:15.203428   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:44:15.219368   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:44:15.219394   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:44:15.296726   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:44:15.289112    5647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:15.289522    5647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:15.291014    5647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:15.291333    5647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:15.292981    5647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:44:15.289112    5647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:15.289522    5647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:15.291014    5647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:15.291333    5647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:15.292981    5647 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:44:15.296748   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:44:15.296761   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:15.322490   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:44:15.322516   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:15.364728   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:44:15.364760   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:17.892524   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:44:17.903413   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:44:17.903482   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:44:17.931967   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:17.931989   51251 cri.go:89] found id: ""
	I1018 17:44:17.931997   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:44:17.932052   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:17.935895   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:44:17.936007   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:44:17.983924   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:17.983945   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:17.983950   51251 cri.go:89] found id: ""
	I1018 17:44:17.983958   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:44:17.984014   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:17.987660   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:17.991127   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:44:17.991201   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:44:18.022803   51251 cri.go:89] found id: ""
	I1018 17:44:18.022827   51251 logs.go:282] 0 containers: []
	W1018 17:44:18.022836   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:44:18.022843   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:44:18.022906   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:44:18.064735   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:18.064754   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:18.064759   51251 cri.go:89] found id: ""
	I1018 17:44:18.064767   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:44:18.064823   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:18.068536   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:18.072878   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:44:18.072982   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:44:18.100206   51251 cri.go:89] found id: ""
	I1018 17:44:18.100237   51251 logs.go:282] 0 containers: []
	W1018 17:44:18.100246   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:44:18.100253   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:44:18.100321   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:44:18.127552   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:18.127575   51251 cri.go:89] found id: ""
	I1018 17:44:18.127584   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:44:18.127641   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:18.131667   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:44:18.131732   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:44:18.162707   51251 cri.go:89] found id: ""
	I1018 17:44:18.162731   51251 logs.go:282] 0 containers: []
	W1018 17:44:18.162739   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:44:18.162748   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:44:18.162763   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:44:18.246228   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:44:18.238684    5739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:18.239276    5739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:18.240721    5739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:18.241146    5739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:18.242608    5739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:44:18.238684    5739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:18.239276    5739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:18.240721    5739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:18.241146    5739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:18.242608    5739 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:44:18.246250   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:44:18.246263   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:18.277740   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:44:18.277764   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:18.343394   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:44:18.343427   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:18.383823   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:44:18.383854   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:18.443389   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:44:18.443420   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:18.469522   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:44:18.469550   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:44:18.545455   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:44:18.545487   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:44:18.592352   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:44:18.592376   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:44:18.695698   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:44:18.695735   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:44:18.707163   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:44:18.707192   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
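The cycle above repeats one discovery pattern per component: list matching container IDs with "sudo crictl ps -a --quiet --name=<component>", then tail each hit with "sudo crictl logs --tail 400 <id>". A minimal standalone sketch of that pattern is below; helper names are hypothetical and it shells out to crictl directly rather than going through minikube's ssh_runner, so it is an illustration of the traced commands, not minikube's actual cri.go.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerIDs lists all CRI container IDs whose name matches the given
    // component, mirroring the "sudo crictl ps -a --quiet --name=<component>"
    // calls traced in the log above. Hypothetical helper for illustration.
    func containerIDs(component string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
    	if err != nil {
    		return nil, err
    	}
    	var ids []string
    	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
    		if line != "" {
    			ids = append(ids, line)
    		}
    	}
    	return ids, nil
    }

    // tailLogs fetches the last n lines of a container's log, mirroring
    // "sudo crictl logs --tail 400 <id>".
    func tailLogs(id string, n int) (string, error) {
    	out, err := exec.Command("sudo", "crictl", "logs", "--tail", fmt.Sprint(n), id).CombinedOutput()
    	return string(out), err
    }

    func main() {
    	for _, component := range []string{"kube-apiserver", "etcd", "kube-scheduler", "kube-controller-manager"} {
    		ids, err := containerIDs(component)
    		if err != nil || len(ids) == 0 {
    			fmt.Printf("no container was found matching %q\n", component)
    			continue
    		}
    		for _, id := range ids {
    			logs, _ := tailLogs(id, 400)
    			fmt.Printf("=== %s [%s] ===\n%s\n", component, id, logs)
    		}
    	}
    }

Run on a node with crictl installed, this prints the same 400-line tails the log gatherer collects; components with no containers (coredns, kube-proxy, kindnet above) simply report no match.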
	I1018 17:44:21.235420   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:44:21.245952   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:44:21.246019   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:44:21.271930   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:21.271997   51251 cri.go:89] found id: ""
	I1018 17:44:21.272019   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:44:21.272106   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:21.275968   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:44:21.276036   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:44:21.302979   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:21.302997   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:21.303001   51251 cri.go:89] found id: ""
	I1018 17:44:21.303008   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:44:21.303069   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:21.307879   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:21.311562   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:44:21.311627   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:44:21.339660   51251 cri.go:89] found id: ""
	I1018 17:44:21.339681   51251 logs.go:282] 0 containers: []
	W1018 17:44:21.339690   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:44:21.339695   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:44:21.339752   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:44:21.368389   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:21.368411   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:21.368416   51251 cri.go:89] found id: ""
	I1018 17:44:21.368424   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:44:21.368478   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:21.372383   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:21.375709   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:44:21.375779   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:44:21.401944   51251 cri.go:89] found id: ""
	I1018 17:44:21.402017   51251 logs.go:282] 0 containers: []
	W1018 17:44:21.402040   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:44:21.402058   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:44:21.402140   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:44:21.428284   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:21.428303   51251 cri.go:89] found id: ""
	I1018 17:44:21.428312   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:44:21.428392   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:21.432085   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:44:21.432163   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:44:21.456804   51251 cri.go:89] found id: ""
	I1018 17:44:21.456878   51251 logs.go:282] 0 containers: []
	W1018 17:44:21.456899   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:44:21.456922   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:44:21.456987   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:44:21.530466   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:44:21.522476    5877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:21.523226    5877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:21.524791    5877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:21.525409    5877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:21.526934    5877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:44:21.522476    5877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:21.523226    5877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:21.524791    5877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:21.525409    5877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:21.526934    5877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:44:21.530487   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:44:21.530500   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:21.583954   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:44:21.583988   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:21.624634   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:44:21.624667   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:21.683522   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:44:21.683555   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:21.712030   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:44:21.712058   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:21.743203   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:44:21.743227   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:44:21.823114   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:44:21.823149   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:44:21.854521   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:44:21.854548   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:44:21.957239   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:44:21.957276   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:44:21.974988   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:44:21.975013   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:24.514740   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:44:24.525668   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:44:24.525738   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:44:24.553057   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:24.553087   51251 cri.go:89] found id: ""
	I1018 17:44:24.553096   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:44:24.553152   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:24.556981   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:44:24.557053   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:44:24.583773   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:24.583796   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:24.583801   51251 cri.go:89] found id: ""
	I1018 17:44:24.583809   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:44:24.583864   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:24.587649   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:24.591283   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:44:24.591388   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:44:24.617918   51251 cri.go:89] found id: ""
	I1018 17:44:24.617940   51251 logs.go:282] 0 containers: []
	W1018 17:44:24.617949   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:44:24.617959   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:44:24.618025   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:44:24.643293   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:24.643319   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:24.643323   51251 cri.go:89] found id: ""
	I1018 17:44:24.643331   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:44:24.643391   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:24.647045   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:24.650422   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:44:24.650491   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:44:24.676556   51251 cri.go:89] found id: ""
	I1018 17:44:24.676629   51251 logs.go:282] 0 containers: []
	W1018 17:44:24.676652   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:44:24.676670   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:44:24.676753   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:44:24.703335   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:24.703354   51251 cri.go:89] found id: ""
	I1018 17:44:24.703362   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:44:24.703413   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:24.707043   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:44:24.707112   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:44:24.736770   51251 cri.go:89] found id: ""
	I1018 17:44:24.736793   51251 logs.go:282] 0 containers: []
	W1018 17:44:24.736802   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:44:24.736811   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:44:24.736821   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:44:24.831690   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:44:24.831725   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:44:24.845067   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:44:24.845094   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:44:24.915666   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:44:24.907247    6020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:24.907870    6020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:24.909378    6020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:24.910211    6020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:24.911689    6020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:44:24.907247    6020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:24.907870    6020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:24.909378    6020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:24.910211    6020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:24.911689    6020 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:44:24.915715   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:44:24.915728   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:24.980758   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:44:24.980794   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:25.013913   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:44:25.013944   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:44:25.095710   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:44:25.095746   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:44:25.136366   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:44:25.136395   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:25.167081   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:44:25.167108   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:25.217068   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:44:25.217106   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:25.250444   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:44:25.250477   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:27.778976   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:44:27.789442   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:44:27.789511   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:44:27.816188   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:27.816211   51251 cri.go:89] found id: ""
	I1018 17:44:27.816219   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:44:27.816273   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:27.819794   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:44:27.819867   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:44:27.846400   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:27.846433   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:27.846439   51251 cri.go:89] found id: ""
	I1018 17:44:27.846461   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:44:27.846546   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:27.850346   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:27.853879   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:44:27.853956   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:44:27.880448   51251 cri.go:89] found id: ""
	I1018 17:44:27.880471   51251 logs.go:282] 0 containers: []
	W1018 17:44:27.880480   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:44:27.880486   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:44:27.880549   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:44:27.908354   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:27.908384   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:27.908389   51251 cri.go:89] found id: ""
	I1018 17:44:27.908397   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:44:27.908454   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:27.913635   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:27.917518   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:44:27.917589   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:44:27.944652   51251 cri.go:89] found id: ""
	I1018 17:44:27.944674   51251 logs.go:282] 0 containers: []
	W1018 17:44:27.944683   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:44:27.944689   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:44:27.944749   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:44:27.978127   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:27.978150   51251 cri.go:89] found id: ""
	I1018 17:44:27.978158   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:44:27.978217   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:27.982028   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:44:27.982097   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:44:28.010364   51251 cri.go:89] found id: ""
	I1018 17:44:28.010395   51251 logs.go:282] 0 containers: []
	W1018 17:44:28.010405   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:44:28.010414   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:44:28.010426   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:44:28.113197   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:44:28.113275   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:28.143438   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:44:28.143464   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:28.193919   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:44:28.193956   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:28.233324   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:44:28.233364   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:44:28.315086   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:44:28.315121   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:44:28.327446   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:44:28.327472   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:44:28.403227   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:44:28.392160    6186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:28.393002    6186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:28.395106    6186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:28.395823    6186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:28.397363    6186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:44:28.392160    6186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:28.393002    6186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:28.395106    6186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:28.395823    6186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:28.397363    6186 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:44:28.403250   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:44:28.403262   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:28.467992   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:44:28.468024   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:28.495923   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:44:28.495947   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:28.526646   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:44:28.526674   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
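The "container status" step above uses a shell fallback chain: prefer crictl if it is on PATH, otherwise fall back to "docker ps -a". A small Go sketch of the same idea follows; the helper name is hypothetical and this is only an illustration of the fallback, not minikube's implementation.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // containerStatus mirrors `sudo \`which crictl || echo crictl\` ps -a || sudo docker ps -a`:
    // use crictl when it exists on PATH, otherwise try docker.
    func containerStatus() (string, error) {
    	if _, err := exec.LookPath("crictl"); err == nil {
    		out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
    		return string(out), err
    	}
    	out, err := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
    	return string(out), err
    }

    func main() {
    	out, err := containerStatus()
    	if err != nil {
    		fmt.Println("listing containers failed:", err)
    	}
    	fmt.Print(out)
    }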
	I1018 17:44:31.058337   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:44:31.069976   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:44:31.070050   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:44:31.101306   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:31.101328   51251 cri.go:89] found id: ""
	I1018 17:44:31.101336   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:44:31.101399   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:31.105055   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:44:31.105128   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:44:31.142563   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:31.142588   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:31.142593   51251 cri.go:89] found id: ""
	I1018 17:44:31.142600   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:44:31.142662   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:31.146604   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:31.150365   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:44:31.150435   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:44:31.176760   51251 cri.go:89] found id: ""
	I1018 17:44:31.176785   51251 logs.go:282] 0 containers: []
	W1018 17:44:31.176793   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:44:31.176800   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:44:31.176894   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:44:31.209000   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:31.209022   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:31.209027   51251 cri.go:89] found id: ""
	I1018 17:44:31.209034   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:44:31.209092   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:31.213702   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:31.217030   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:44:31.217134   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:44:31.244577   51251 cri.go:89] found id: ""
	I1018 17:44:31.244600   51251 logs.go:282] 0 containers: []
	W1018 17:44:31.244608   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:44:31.244615   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:44:31.244694   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:44:31.276009   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:31.276030   51251 cri.go:89] found id: ""
	I1018 17:44:31.276037   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:44:31.276126   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:31.279948   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:44:31.280039   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:44:31.312074   51251 cri.go:89] found id: ""
	I1018 17:44:31.312098   51251 logs.go:282] 0 containers: []
	W1018 17:44:31.312108   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:44:31.312117   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:44:31.312146   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:31.374723   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:44:31.374758   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:31.402419   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:44:31.402446   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:31.430538   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:44:31.430564   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:44:31.512803   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:44:31.512837   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:44:31.614079   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:44:31.614114   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:44:31.681910   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:44:31.673049    6314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:31.673806    6314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:31.675573    6314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:31.676196    6314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:31.677982    6314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:44:31.673049    6314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:31.673806    6314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:31.675573    6314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:31.676196    6314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:31.677982    6314 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:44:31.681935   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:44:31.681956   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:31.707698   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:44:31.707730   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:31.744929   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:44:31.745030   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:44:31.776082   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:44:31.776119   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:44:31.788990   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:44:31.789026   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:34.355514   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:44:34.366625   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:44:34.366689   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:44:34.394220   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:34.394241   51251 cri.go:89] found id: ""
	I1018 17:44:34.394249   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:44:34.394307   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:34.398229   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:44:34.398301   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:44:34.428966   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:34.428987   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:34.428991   51251 cri.go:89] found id: ""
	I1018 17:44:34.428999   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:44:34.429056   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:34.438000   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:34.443562   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:44:34.443638   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:44:34.470520   51251 cri.go:89] found id: ""
	I1018 17:44:34.470583   51251 logs.go:282] 0 containers: []
	W1018 17:44:34.470596   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:44:34.470603   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:44:34.470660   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:44:34.498015   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:34.498035   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:34.498040   51251 cri.go:89] found id: ""
	I1018 17:44:34.498047   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:44:34.498107   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:34.501820   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:34.505392   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:44:34.505508   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:44:34.531261   51251 cri.go:89] found id: ""
	I1018 17:44:34.531285   51251 logs.go:282] 0 containers: []
	W1018 17:44:34.531294   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:44:34.531301   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:44:34.531391   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:44:34.558417   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:34.558439   51251 cri.go:89] found id: ""
	I1018 17:44:34.558448   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:44:34.558506   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:34.562283   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:44:34.562397   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:44:34.589239   51251 cri.go:89] found id: ""
	I1018 17:44:34.589263   51251 logs.go:282] 0 containers: []
	W1018 17:44:34.589271   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:44:34.589280   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:44:34.589321   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:34.639508   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:44:34.639543   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:34.704073   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:44:34.704111   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:34.730079   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:44:34.730105   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:44:34.812757   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:44:34.812794   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:44:34.844323   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:44:34.844351   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:34.870994   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:44:34.871020   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:34.909712   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:44:34.909738   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:34.949435   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:44:34.949461   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:44:35.051363   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:44:35.051403   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:44:35.064297   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:44:35.064324   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:44:35.143040   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:44:35.134155    6490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:35.134888    6490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:35.136750    6490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:35.137513    6490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:35.139182    6490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1018 17:44:37.644402   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:44:37.655473   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:44:37.655556   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:44:37.686712   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:37.686743   51251 cri.go:89] found id: ""
	I1018 17:44:37.686753   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:44:37.686818   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:37.690705   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:44:37.690780   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:44:37.717269   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:37.717288   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:37.717293   51251 cri.go:89] found id: ""
	I1018 17:44:37.717300   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:44:37.717365   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:37.721019   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:37.724434   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:44:37.724511   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:44:37.751507   51251 cri.go:89] found id: ""
	I1018 17:44:37.751529   51251 logs.go:282] 0 containers: []
	W1018 17:44:37.751548   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:44:37.751554   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:44:37.751612   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:44:37.780532   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:37.780550   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:37.780555   51251 cri.go:89] found id: ""
	I1018 17:44:37.780562   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:44:37.780620   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:37.784463   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:37.789038   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:44:37.789127   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:44:37.827207   51251 cri.go:89] found id: ""
	I1018 17:44:37.827234   51251 logs.go:282] 0 containers: []
	W1018 17:44:37.827243   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:44:37.827250   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:44:37.827328   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:44:37.854900   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:37.854962   51251 cri.go:89] found id: ""
	I1018 17:44:37.854986   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:44:37.855062   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:37.859902   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:44:37.859977   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:44:37.886300   51251 cri.go:89] found id: ""
	I1018 17:44:37.886365   51251 logs.go:282] 0 containers: []
	W1018 17:44:37.886388   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:44:37.886409   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:44:37.886446   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:44:37.984179   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:44:37.984212   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:44:38.054964   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:44:38.045702    6560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:38.046390    6560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:38.048099    6560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:38.048652    6560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:38.050343    6560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1018 17:44:38.054994   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:44:38.055010   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:38.084660   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:44:38.084691   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:38.124518   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:44:38.124606   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:38.190852   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:44:38.190893   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:44:38.273991   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:44:38.274027   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:44:38.286517   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:44:38.286546   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:38.338543   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:44:38.338580   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:38.367716   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:44:38.367745   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:38.401155   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:44:38.401184   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:44:40.943389   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:44:40.954255   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:44:40.954330   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:44:40.990505   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:40.990526   51251 cri.go:89] found id: ""
	I1018 17:44:40.990535   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:44:40.990591   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:40.994301   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:44:40.994374   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:44:41.024101   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:41.024123   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:41.024128   51251 cri.go:89] found id: ""
	I1018 17:44:41.024135   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:44:41.024202   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:41.028135   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:41.031764   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:44:41.031846   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:44:41.058027   51251 cri.go:89] found id: ""
	I1018 17:44:41.058110   51251 logs.go:282] 0 containers: []
	W1018 17:44:41.058133   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:44:41.058154   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:44:41.058241   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:44:41.084363   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:41.084429   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:41.084447   51251 cri.go:89] found id: ""
	I1018 17:44:41.084468   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:44:41.084549   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:41.088275   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:41.091806   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:44:41.091872   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:44:41.119266   51251 cri.go:89] found id: ""
	I1018 17:44:41.119288   51251 logs.go:282] 0 containers: []
	W1018 17:44:41.119296   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:44:41.119302   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:44:41.119364   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:44:41.152142   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:41.152162   51251 cri.go:89] found id: ""
	I1018 17:44:41.152171   51251 logs.go:282] 1 containers: [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:44:41.152233   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:41.155967   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:44:41.156039   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:44:41.183430   51251 cri.go:89] found id: ""
	I1018 17:44:41.183453   51251 logs.go:282] 0 containers: []
	W1018 17:44:41.183461   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:44:41.183470   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:44:41.183481   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:41.217575   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:44:41.217599   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:44:41.314633   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:44:41.314667   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:44:41.383386   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:44:41.373451    6704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:41.374006    6704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:41.375984    6704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:41.377691    6704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:41.379407    6704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1018 17:44:41.383406   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:44:41.383419   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:41.446018   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:44:41.446089   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:41.488303   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:44:41.488335   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:41.520983   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:44:41.521012   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:44:41.604693   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:44:41.604726   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:44:41.638240   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:44:41.638266   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:44:41.649462   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:44:41.649486   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:41.674875   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:44:41.674902   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:44.238248   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:44:44.255175   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:44:44.255240   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:44:44.287509   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:44.287527   51251 cri.go:89] found id: ""
	I1018 17:44:44.287535   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:44:44.287592   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:44.292053   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:44:44.292125   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:44:44.323105   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:44.323123   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:44.323128   51251 cri.go:89] found id: ""
	I1018 17:44:44.323135   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:44:44.323191   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:44.327287   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:44.331002   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:44:44.331110   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:44:44.362329   51251 cri.go:89] found id: ""
	I1018 17:44:44.362393   51251 logs.go:282] 0 containers: []
	W1018 17:44:44.362415   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:44:44.362436   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:44:44.362517   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:44:44.393314   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:44.393384   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:44.393403   51251 cri.go:89] found id: ""
	I1018 17:44:44.393432   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:44:44.393510   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:44.397610   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:44.401568   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:44:44.401674   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:44:44.439288   51251 cri.go:89] found id: ""
	I1018 17:44:44.439350   51251 logs.go:282] 0 containers: []
	W1018 17:44:44.439370   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:44:44.439391   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:44:44.439473   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:44:44.477857   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:44:44.477920   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:44.477939   51251 cri.go:89] found id: ""
	I1018 17:44:44.477960   51251 logs.go:282] 2 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:44:44.478038   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:44.482903   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:44.487434   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:44:44.487551   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:44:44.527686   51251 cri.go:89] found id: ""
	I1018 17:44:44.527761   51251 logs.go:282] 0 containers: []
	W1018 17:44:44.527784   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:44:44.527823   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:44:44.527850   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:44:44.637841   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:44:44.637917   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:44:44.653818   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:44:44.653846   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:44:44.762008   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:44:44.751907    6845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:44.753161    6845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:44.755038    6845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:44.755967    6845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:44.757158    6845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1018 17:44:44.762038   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:44:44.762067   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:44.798868   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:44:44.798900   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:44.850591   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:44:44.850634   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:44.938420   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:44:44.938472   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:44:44.980294   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:44:44.980372   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:44:45.089048   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:44:45.089096   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:45.196420   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:44:45.196522   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:45.246623   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:44:45.246803   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:45.295911   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:44:45.295955   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:44:47.851142   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:44:47.862455   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:44:47.862520   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:44:47.888902   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:47.888970   51251 cri.go:89] found id: ""
	I1018 17:44:47.888984   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:44:47.889042   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:47.893115   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:44:47.893208   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:44:47.923068   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:47.923087   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:47.923091   51251 cri.go:89] found id: ""
	I1018 17:44:47.923099   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:44:47.923170   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:47.927351   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:47.931468   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:44:47.931541   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:44:47.958620   51251 cri.go:89] found id: ""
	I1018 17:44:47.958642   51251 logs.go:282] 0 containers: []
	W1018 17:44:47.958651   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:44:47.958657   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:44:47.958717   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:44:47.988421   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:47.988494   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:47.988514   51251 cri.go:89] found id: ""
	I1018 17:44:47.988534   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:44:47.988616   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:47.992743   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:47.996667   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:44:47.996742   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:44:48.025533   51251 cri.go:89] found id: ""
	I1018 17:44:48.025560   51251 logs.go:282] 0 containers: []
	W1018 17:44:48.025568   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:44:48.025575   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:44:48.025654   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:44:48.053974   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:44:48.053997   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:48.054002   51251 cri.go:89] found id: ""
	I1018 17:44:48.054009   51251 logs.go:282] 2 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:44:48.054070   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:48.057945   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:48.061877   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:44:48.061953   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:44:48.090761   51251 cri.go:89] found id: ""
	I1018 17:44:48.090786   51251 logs.go:282] 0 containers: []
	W1018 17:44:48.090795   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:44:48.090805   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:44:48.090817   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:44:48.189723   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:44:48.189756   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:48.221709   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:44:48.221739   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:48.259440   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:44:48.259470   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:48.345516   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:44:48.345553   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:48.374446   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:44:48.374477   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:44:48.460806   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:44:48.460842   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:44:48.473713   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:44:48.473739   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:44:48.554183   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:44:48.545515    7023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:48.546813    7023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:48.547313    7023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:48.548898    7023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:48.549566    7023 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1018 17:44:48.554204   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:44:48.554217   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:48.609158   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:44:48.609190   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:44:48.636984   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:44:48.637062   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:48.664743   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:44:48.664822   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:44:51.198411   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:44:51.210016   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:44:51.210081   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:44:51.236981   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:51.237004   51251 cri.go:89] found id: ""
	I1018 17:44:51.237012   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:44:51.237077   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:51.240676   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:44:51.240750   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:44:51.269356   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:51.269382   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:51.269387   51251 cri.go:89] found id: ""
	I1018 17:44:51.269395   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:44:51.269453   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:51.273122   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:51.277060   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:44:51.277132   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:44:51.304766   51251 cri.go:89] found id: ""
	I1018 17:44:51.304790   51251 logs.go:282] 0 containers: []
	W1018 17:44:51.304799   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:44:51.304805   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:44:51.304865   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:44:51.332379   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:51.332401   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:51.332406   51251 cri.go:89] found id: ""
	I1018 17:44:51.332414   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:44:51.332474   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:51.336518   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:51.341898   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:44:51.341976   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:44:51.367678   51251 cri.go:89] found id: ""
	I1018 17:44:51.367708   51251 logs.go:282] 0 containers: []
	W1018 17:44:51.367726   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:44:51.367732   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:44:51.367796   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:44:51.394153   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:44:51.394175   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:51.394180   51251 cri.go:89] found id: ""
	I1018 17:44:51.394187   51251 logs.go:282] 2 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:44:51.394243   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:51.397993   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:51.401471   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:44:51.401578   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:44:51.428758   51251 cri.go:89] found id: ""
	I1018 17:44:51.428822   51251 logs.go:282] 0 containers: []
	W1018 17:44:51.428844   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:44:51.428870   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:44:51.428894   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:44:51.503688   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:44:51.495917    7130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:51.496423    7130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:51.498141    7130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:51.498547    7130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:51.500003    7130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1018 17:44:51.503709   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:44:51.503722   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:51.532853   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:44:51.532878   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:51.596823   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:44:51.596858   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:44:51.623499   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:44:51.623527   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:51.653511   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:44:51.653538   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:44:51.743235   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:44:51.743280   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:44:51.775603   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:44:51.775632   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:44:51.875854   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:44:51.875890   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:44:51.893446   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:44:51.893471   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:51.928284   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:44:51.928316   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:51.997158   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:44:51.997193   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:54.531254   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:44:54.544073   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:44:54.544143   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:44:54.572505   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:54.572526   51251 cri.go:89] found id: ""
	I1018 17:44:54.572534   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:44:54.572589   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:54.576276   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:44:54.576349   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:44:54.608530   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:54.608552   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:54.608557   51251 cri.go:89] found id: ""
	I1018 17:44:54.608564   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:44:54.608620   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:54.612802   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:54.616507   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:44:54.616574   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:44:54.646887   51251 cri.go:89] found id: ""
	I1018 17:44:54.646909   51251 logs.go:282] 0 containers: []
	W1018 17:44:54.646918   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:44:54.646924   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:44:54.646985   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:44:54.673624   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:54.673641   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:54.673646   51251 cri.go:89] found id: ""
	I1018 17:44:54.673653   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:44:54.673708   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:54.677580   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:54.680915   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:44:54.681039   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:44:54.707856   51251 cri.go:89] found id: ""
	I1018 17:44:54.707882   51251 logs.go:282] 0 containers: []
	W1018 17:44:54.707890   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:44:54.707897   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:44:54.707985   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:44:54.739572   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:44:54.739596   51251 cri.go:89] found id: "cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:54.739602   51251 cri.go:89] found id: ""
	I1018 17:44:54.739609   51251 logs.go:282] 2 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce]
	I1018 17:44:54.739666   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:54.744278   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:54.747740   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:44:54.747812   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:44:54.786379   51251 cri.go:89] found id: ""
	I1018 17:44:54.786405   51251 logs.go:282] 0 containers: []
	W1018 17:44:54.786413   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:44:54.786423   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:44:54.786435   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:54.850541   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:44:54.850577   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:54.878112   51251 logs.go:123] Gathering logs for kube-controller-manager [cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce] ...
	I1018 17:44:54.878139   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc58fb68ffaf24b46d876e8762d3fd7e982c2be487fbf96410c180b75f49dcce"
	I1018 17:44:54.905434   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:44:54.905462   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:44:54.983610   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:44:54.974914    7302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:54.975800    7302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:54.977585    7302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:54.978207    7302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:54.979920    7302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:44:54.974914    7302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:54.975800    7302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:54.977585    7302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:54.978207    7302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:54.979920    7302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:44:54.983631   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:44:54.983643   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:44:55.018119   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:44:55.018148   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:44:55.096411   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:44:55.096446   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:44:55.134900   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:44:55.134926   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:44:55.237181   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:44:55.237214   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:44:55.250828   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:44:55.250858   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:55.281899   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:44:55.281928   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:55.339174   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:44:55.339208   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:57.880428   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:44:57.891159   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:44:57.891231   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:44:57.921966   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:57.921988   51251 cri.go:89] found id: ""
	I1018 17:44:57.921996   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:44:57.922051   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:57.925877   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:44:57.925946   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:44:57.983701   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:57.983719   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:57.983724   51251 cri.go:89] found id: ""
	I1018 17:44:57.983731   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:44:57.983785   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:57.988147   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:57.991948   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:44:57.992055   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:44:58.027455   51251 cri.go:89] found id: ""
	I1018 17:44:58.027489   51251 logs.go:282] 0 containers: []
	W1018 17:44:58.027498   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:44:58.027504   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:44:58.027572   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:44:58.061874   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:58.061896   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:58.061902   51251 cri.go:89] found id: ""
	I1018 17:44:58.061911   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:44:58.061971   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:58.065752   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:58.069525   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:44:58.069600   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:44:58.099676   51251 cri.go:89] found id: ""
	I1018 17:44:58.099698   51251 logs.go:282] 0 containers: []
	W1018 17:44:58.099707   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:44:58.099720   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:44:58.099778   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:44:58.132718   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:44:58.132740   51251 cri.go:89] found id: ""
	I1018 17:44:58.132748   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:44:58.132803   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:44:58.136641   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:44:58.136718   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:44:58.161767   51251 cri.go:89] found id: ""
	I1018 17:44:58.161791   51251 logs.go:282] 0 containers: []
	W1018 17:44:58.161799   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:44:58.161808   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:44:58.161820   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:44:58.239848   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:44:58.231755    7434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:58.232488    7434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:58.234323    7434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:58.234970    7434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:58.236249    7434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:44:58.231755    7434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:58.232488    7434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:58.234323    7434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:58.234970    7434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:44:58.236249    7434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:44:58.239867   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:44:58.239879   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:44:58.265229   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:44:58.265253   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:44:58.316459   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:44:58.316495   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:44:58.382736   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:44:58.382771   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:44:58.461400   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:44:58.461435   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:44:58.496880   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:44:58.496905   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:44:58.600326   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:44:58.600360   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:44:58.612833   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:44:58.612860   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:44:58.652792   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:44:58.652823   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:44:58.683598   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:44:58.683624   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:01.209276   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:45:01.221741   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:45:01.221825   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:45:01.255998   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:01.256020   51251 cri.go:89] found id: ""
	I1018 17:45:01.256029   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:45:01.256090   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:01.260323   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:45:01.260410   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:45:01.290623   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:01.290646   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:01.290652   51251 cri.go:89] found id: ""
	I1018 17:45:01.290660   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:45:01.290722   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:01.294923   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:01.299340   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:45:01.299421   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:45:01.328205   51251 cri.go:89] found id: ""
	I1018 17:45:01.328234   51251 logs.go:282] 0 containers: []
	W1018 17:45:01.328244   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:45:01.328251   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:45:01.328321   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:45:01.360099   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:01.360123   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:01.360128   51251 cri.go:89] found id: ""
	I1018 17:45:01.360136   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:45:01.360209   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:01.364283   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:01.368572   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:45:01.368657   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:45:01.397092   51251 cri.go:89] found id: ""
	I1018 17:45:01.397161   51251 logs.go:282] 0 containers: []
	W1018 17:45:01.397184   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:45:01.397207   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:45:01.397297   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:45:01.426452   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:01.426520   51251 cri.go:89] found id: ""
	I1018 17:45:01.426537   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:45:01.426623   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:01.430959   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:45:01.431090   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:45:01.460044   51251 cri.go:89] found id: ""
	I1018 17:45:01.460085   51251 logs.go:282] 0 containers: []
	W1018 17:45:01.460095   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:45:01.460126   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:45:01.460171   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:45:01.536047   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:45:01.536083   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:45:01.548838   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:45:01.548870   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:01.581436   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:45:01.581464   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:01.639347   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:45:01.639384   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:01.667540   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:45:01.667571   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:45:01.714304   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:45:01.714330   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:45:01.813430   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:45:01.813510   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:45:01.882898   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:45:01.873459    7615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:01.874354    7615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:01.876306    7615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:01.877166    7615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:01.878779    7615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:45:01.873459    7615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:01.874354    7615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:01.876306    7615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:01.877166    7615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:01.878779    7615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:45:01.882921   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:45:01.882937   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:01.917303   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:45:01.917407   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:01.999403   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:45:01.999445   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:04.533522   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:45:04.544111   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:45:04.544187   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:45:04.570770   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:04.570840   51251 cri.go:89] found id: ""
	I1018 17:45:04.570855   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:45:04.570912   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:04.575103   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:45:04.575198   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:45:04.609501   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:04.609532   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:04.609537   51251 cri.go:89] found id: ""
	I1018 17:45:04.609545   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:45:04.609600   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:04.613955   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:04.617439   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:45:04.617516   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:45:04.645280   51251 cri.go:89] found id: ""
	I1018 17:45:04.645306   51251 logs.go:282] 0 containers: []
	W1018 17:45:04.645315   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:45:04.645324   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:45:04.645392   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:45:04.672130   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:04.672153   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:04.672158   51251 cri.go:89] found id: ""
	I1018 17:45:04.672167   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:45:04.672223   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:04.676297   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:04.681021   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:45:04.681099   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:45:04.707420   51251 cri.go:89] found id: ""
	I1018 17:45:04.707444   51251 logs.go:282] 0 containers: []
	W1018 17:45:04.707452   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:45:04.707461   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:45:04.707517   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:45:04.737533   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:04.737555   51251 cri.go:89] found id: ""
	I1018 17:45:04.737565   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:45:04.737631   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:04.741271   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:45:04.741342   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:45:04.767657   51251 cri.go:89] found id: ""
	I1018 17:45:04.767681   51251 logs.go:282] 0 containers: []
	W1018 17:45:04.767689   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:45:04.767699   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:45:04.767710   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:45:04.863553   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:45:04.863587   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:45:04.875569   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:45:04.875600   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:04.930436   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:45:04.930476   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:04.969240   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:45:04.969276   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:05.039302   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:45:05.039336   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:05.067077   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:45:05.067103   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:45:05.148387   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:45:05.148422   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:45:05.223337   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:45:05.215470    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:05.216065    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:05.217641    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:05.218213    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:05.219737    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:45:05.215470    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:05.216065    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:05.217641    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:05.218213    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:05.219737    7749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:45:05.223369   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:45:05.223382   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:05.249066   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:45:05.249091   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:05.280440   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:45:05.280465   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:45:07.817192   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:45:07.827427   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:45:07.827497   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:45:07.853178   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:07.853198   51251 cri.go:89] found id: ""
	I1018 17:45:07.853206   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:45:07.853261   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:07.857004   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:45:07.857072   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:45:07.882619   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:07.882640   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:07.882645   51251 cri.go:89] found id: ""
	I1018 17:45:07.882652   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:45:07.882716   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:07.886518   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:07.890146   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:45:07.890220   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:45:07.917313   51251 cri.go:89] found id: ""
	I1018 17:45:07.917338   51251 logs.go:282] 0 containers: []
	W1018 17:45:07.917351   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:45:07.917358   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:45:07.917421   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:45:07.950191   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:07.950218   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:07.950223   51251 cri.go:89] found id: ""
	I1018 17:45:07.950234   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:45:07.950304   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:07.953933   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:07.957694   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:45:07.957770   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:45:07.990144   51251 cri.go:89] found id: ""
	I1018 17:45:07.990167   51251 logs.go:282] 0 containers: []
	W1018 17:45:07.990176   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:45:07.990183   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:45:07.990240   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:45:08.023638   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:08.023660   51251 cri.go:89] found id: ""
	I1018 17:45:08.023669   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:45:08.023729   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:08.028231   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:45:08.028307   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:45:08.056653   51251 cri.go:89] found id: ""
	I1018 17:45:08.056678   51251 logs.go:282] 0 containers: []
	W1018 17:45:08.056687   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:45:08.056696   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:45:08.056708   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:45:08.132641   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:45:08.122188    7845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:08.122913    7845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:08.124506    7845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:08.124806    7845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:08.126307    7845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:45:08.122188    7845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:08.122913    7845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:08.124506    7845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:08.124806    7845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:08.126307    7845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:45:08.132662   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:45:08.132677   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:08.197105   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:45:08.197143   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:08.238131   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:45:08.238157   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:08.266672   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:45:08.266701   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:45:08.302562   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:45:08.302587   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:45:08.411059   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:45:08.411103   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:45:08.423232   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:45:08.423261   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:08.449524   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:45:08.449549   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:08.505779   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:45:08.505811   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:08.540674   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:45:08.540708   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:45:11.118218   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:45:11.130399   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:45:11.130521   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:45:11.164618   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:11.164637   51251 cri.go:89] found id: ""
	I1018 17:45:11.164644   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:45:11.164700   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:11.168380   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:45:11.168453   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:45:11.195034   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:11.195059   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:11.195065   51251 cri.go:89] found id: ""
	I1018 17:45:11.195072   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:45:11.195126   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:11.199134   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:11.203492   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:45:11.203557   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:45:11.230659   51251 cri.go:89] found id: ""
	I1018 17:45:11.230681   51251 logs.go:282] 0 containers: []
	W1018 17:45:11.230689   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:45:11.230697   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:45:11.230773   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:45:11.256814   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:11.256842   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:11.256847   51251 cri.go:89] found id: ""
	I1018 17:45:11.256855   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:45:11.256973   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:11.260554   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:11.263940   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:45:11.264009   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:45:11.289036   51251 cri.go:89] found id: ""
	I1018 17:45:11.289114   51251 logs.go:282] 0 containers: []
	W1018 17:45:11.289128   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:45:11.289134   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:45:11.289192   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:45:11.320844   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:11.320867   51251 cri.go:89] found id: ""
	I1018 17:45:11.320875   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:45:11.320928   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:11.324471   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:45:11.324537   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:45:11.350002   51251 cri.go:89] found id: ""
	I1018 17:45:11.350028   51251 logs.go:282] 0 containers: []
	W1018 17:45:11.350036   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:45:11.350045   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:45:11.350057   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:45:11.415699   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:45:11.407276    7984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:11.408085    7984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:11.409925    7984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:11.410627    7984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:11.412208    7984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:45:11.407276    7984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:11.408085    7984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:11.409925    7984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:11.410627    7984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:11.412208    7984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:45:11.415719   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:45:11.415732   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:11.467144   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:45:11.467178   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:11.500116   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:45:11.500149   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:11.565053   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:45:11.565083   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:45:11.594806   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:45:11.594833   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:11.621385   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:45:11.621416   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:11.649391   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:45:11.649418   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:11.681270   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:45:11.681294   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:45:11.758017   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:45:11.758049   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:45:11.856363   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:45:11.856394   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:45:14.369690   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:45:14.380482   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:45:14.380582   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:45:14.406908   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:14.406929   51251 cri.go:89] found id: ""
	I1018 17:45:14.406937   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:45:14.406991   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:14.410922   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:45:14.410995   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:45:14.438715   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:14.438787   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:14.438805   51251 cri.go:89] found id: ""
	I1018 17:45:14.438825   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:45:14.438910   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:14.442634   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:14.446455   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:45:14.446583   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:45:14.472662   51251 cri.go:89] found id: ""
	I1018 17:45:14.472729   51251 logs.go:282] 0 containers: []
	W1018 17:45:14.472740   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:45:14.472749   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:45:14.472837   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:45:14.499722   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:14.499787   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:14.499804   51251 cri.go:89] found id: ""
	I1018 17:45:14.499826   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:45:14.499910   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:14.503638   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:14.507247   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:45:14.507364   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:45:14.534947   51251 cri.go:89] found id: ""
	I1018 17:45:14.534973   51251 logs.go:282] 0 containers: []
	W1018 17:45:14.534981   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:45:14.534987   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:45:14.535064   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:45:14.561664   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:14.561686   51251 cri.go:89] found id: ""
	I1018 17:45:14.561695   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:45:14.561753   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:14.565710   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:45:14.565806   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:45:14.595947   51251 cri.go:89] found id: ""
	I1018 17:45:14.595972   51251 logs.go:282] 0 containers: []
	W1018 17:45:14.595980   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:45:14.595990   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:45:14.596029   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:45:14.671772   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:45:14.671807   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:45:14.775531   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:45:14.775566   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:45:14.787782   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:45:14.787811   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:14.819786   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:45:14.819816   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:45:14.851924   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:45:14.851951   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:45:14.920046   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:45:14.911958    8150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:14.912762    8150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:14.914424    8150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:14.914744    8150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:14.916204    8150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:45:14.911958    8150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:14.912762    8150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:14.914424    8150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:14.914744    8150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:14.916204    8150 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:45:14.920119   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:45:14.920139   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:14.977739   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:45:14.977775   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:15.032058   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:45:15.032091   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:15.102494   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:45:15.102529   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:15.138731   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:45:15.138757   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:17.666030   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:45:17.676690   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:45:17.676760   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:45:17.703559   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:17.703578   51251 cri.go:89] found id: ""
	I1018 17:45:17.703585   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:45:17.703638   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:17.707859   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:45:17.707930   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:45:17.735399   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:17.735422   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:17.735433   51251 cri.go:89] found id: ""
	I1018 17:45:17.735441   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:45:17.735498   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:17.739407   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:17.742711   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:45:17.742782   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:45:17.773860   51251 cri.go:89] found id: ""
	I1018 17:45:17.773930   51251 logs.go:282] 0 containers: []
	W1018 17:45:17.773946   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:45:17.773953   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:45:17.774014   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:45:17.800989   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:17.801015   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:17.801021   51251 cri.go:89] found id: ""
	I1018 17:45:17.801028   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:45:17.801094   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:17.805064   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:17.808714   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:45:17.808845   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:45:17.835041   51251 cri.go:89] found id: ""
	I1018 17:45:17.835065   51251 logs.go:282] 0 containers: []
	W1018 17:45:17.835073   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:45:17.835080   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:45:17.835141   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:45:17.866314   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:17.866337   51251 cri.go:89] found id: ""
	I1018 17:45:17.866345   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:45:17.866406   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:17.870038   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:45:17.870110   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:45:17.895894   51251 cri.go:89] found id: ""
	I1018 17:45:17.895916   51251 logs.go:282] 0 containers: []
	W1018 17:45:17.895925   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:45:17.895934   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:45:17.895945   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:45:17.998692   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:45:17.998766   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:45:18.015153   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:45:18.015182   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:18.068223   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:45:18.068259   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:45:18.154314   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:45:18.154356   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:45:18.243477   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:45:18.234737    8277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:18.235447    8277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:18.237270    8277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:18.237840    8277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:18.239403    8277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:45:18.234737    8277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:18.235447    8277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:18.237270    8277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:18.237840    8277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:18.239403    8277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:45:18.243497   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:45:18.243509   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:18.275940   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:45:18.275970   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:18.316930   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:45:18.316995   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:18.389081   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:45:18.389116   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:18.418930   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:45:18.418956   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:18.449161   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:45:18.449188   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:45:20.980259   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:45:20.991356   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:45:20.991427   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:45:21.028373   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:21.028396   51251 cri.go:89] found id: ""
	I1018 17:45:21.028404   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:45:21.028462   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:21.031989   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:45:21.032060   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:45:21.061105   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:21.061126   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:21.061138   51251 cri.go:89] found id: ""
	I1018 17:45:21.061147   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:45:21.061206   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:21.064983   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:21.068555   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:45:21.068622   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:45:21.095318   51251 cri.go:89] found id: ""
	I1018 17:45:21.095340   51251 logs.go:282] 0 containers: []
	W1018 17:45:21.095348   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:45:21.095354   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:45:21.095410   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:45:21.132132   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:21.132167   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:21.132172   51251 cri.go:89] found id: ""
	I1018 17:45:21.132195   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:45:21.132278   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:21.136778   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:21.140214   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:45:21.140288   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:45:21.172583   51251 cri.go:89] found id: ""
	I1018 17:45:21.172605   51251 logs.go:282] 0 containers: []
	W1018 17:45:21.172614   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:45:21.172620   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:45:21.172675   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:45:21.203092   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:21.203113   51251 cri.go:89] found id: ""
	I1018 17:45:21.203121   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:45:21.203176   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:21.207592   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:45:21.207657   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:45:21.235546   51251 cri.go:89] found id: ""
	I1018 17:45:21.235570   51251 logs.go:282] 0 containers: []
	W1018 17:45:21.235580   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:45:21.235589   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:45:21.235635   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:45:21.332614   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:45:21.332652   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:21.360929   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:45:21.361068   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:21.401211   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:45:21.401249   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:21.468558   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:45:21.468594   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:21.498171   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:45:21.498196   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:45:21.576112   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:45:21.576147   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:45:21.607742   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:45:21.607775   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:45:21.619918   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:45:21.619943   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:45:21.687350   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:45:21.679038    8451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:21.679743    8451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:21.681303    8451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:21.681885    8451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:21.683555    8451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:45:21.679038    8451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:21.679743    8451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:21.681303    8451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:21.681885    8451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:21.683555    8451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:45:21.687371   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:45:21.687384   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:21.742021   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:45:21.742057   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:24.270296   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:45:24.281336   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:45:24.281412   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:45:24.310155   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:24.310176   51251 cri.go:89] found id: ""
	I1018 17:45:24.310184   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:45:24.310236   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:24.314848   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:45:24.314949   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:45:24.343101   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:24.343140   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:24.343146   51251 cri.go:89] found id: ""
	I1018 17:45:24.343154   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:45:24.343214   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:24.347137   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:24.350301   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:45:24.350364   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:45:24.375739   51251 cri.go:89] found id: ""
	I1018 17:45:24.375763   51251 logs.go:282] 0 containers: []
	W1018 17:45:24.375774   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:45:24.375787   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:45:24.375845   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:45:24.414912   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:24.414933   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:24.414944   51251 cri.go:89] found id: ""
	I1018 17:45:24.414952   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:45:24.415006   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:24.419585   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:24.423104   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:45:24.423211   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:45:24.449615   51251 cri.go:89] found id: ""
	I1018 17:45:24.449639   51251 logs.go:282] 0 containers: []
	W1018 17:45:24.449647   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:45:24.449653   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:45:24.449709   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:45:24.476036   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:24.476057   51251 cri.go:89] found id: ""
	I1018 17:45:24.476065   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:45:24.476126   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:24.479757   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:45:24.479825   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:45:24.512386   51251 cri.go:89] found id: ""
	I1018 17:45:24.512409   51251 logs.go:282] 0 containers: []
	W1018 17:45:24.512417   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:45:24.512426   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:45:24.512438   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:24.538617   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:45:24.538645   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:24.592949   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:45:24.592984   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:24.621215   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:45:24.621242   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:45:24.697575   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:45:24.697611   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:45:24.769130   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:45:24.760873    8565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:24.761713    8565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:24.763257    8565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:24.763723    8565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:24.765324    8565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:45:24.760873    8565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:24.761713    8565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:24.763257    8565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:24.763723    8565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:24.765324    8565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:45:24.769206   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:45:24.769228   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:24.807477   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:45:24.807508   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:24.880464   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:45:24.880506   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:24.913114   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:45:24.913140   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:45:24.946306   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:45:24.946335   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:45:25.051970   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:45:25.052004   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:45:27.565286   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:45:27.576658   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:45:27.576726   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:45:27.613181   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:27.613202   51251 cri.go:89] found id: ""
	I1018 17:45:27.613210   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:45:27.613264   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:27.617394   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:45:27.617462   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:45:27.645391   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:27.645413   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:27.645418   51251 cri.go:89] found id: ""
	I1018 17:45:27.645426   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:45:27.645494   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:27.649249   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:27.652792   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:45:27.652866   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:45:27.679303   51251 cri.go:89] found id: ""
	I1018 17:45:27.679368   51251 logs.go:282] 0 containers: []
	W1018 17:45:27.679390   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:45:27.679408   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:45:27.679492   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:45:27.705387   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:27.705453   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:27.705466   51251 cri.go:89] found id: ""
	I1018 17:45:27.705475   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:45:27.705532   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:27.709305   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:27.713679   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:45:27.713761   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:45:27.740178   51251 cri.go:89] found id: ""
	I1018 17:45:27.740203   51251 logs.go:282] 0 containers: []
	W1018 17:45:27.740211   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:45:27.740218   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:45:27.740277   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:45:27.768320   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:27.768342   51251 cri.go:89] found id: ""
	I1018 17:45:27.768351   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:45:27.768416   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:27.772360   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:45:27.772471   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:45:27.797997   51251 cri.go:89] found id: ""
	I1018 17:45:27.798018   51251 logs.go:282] 0 containers: []
	W1018 17:45:27.798026   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:45:27.798049   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:45:27.798061   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:27.824302   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:45:27.824379   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:27.859099   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:45:27.859131   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:27.889803   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:45:27.889830   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:45:27.902196   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:45:27.902221   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:27.958924   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:45:27.958960   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:28.038453   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:45:28.038489   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:28.067717   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:45:28.067748   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:45:28.156959   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:45:28.156998   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:45:28.189533   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:45:28.189561   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:45:28.296814   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:45:28.296848   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:45:28.370306   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:45:28.360661    8742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:28.362171    8742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:28.362714    8742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:28.364316    8742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:28.364866    8742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:45:28.360661    8742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:28.362171    8742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:28.362714    8742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:28.364316    8742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:28.364866    8742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:45:30.870515   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:45:30.881788   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:45:30.881863   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:45:30.910070   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:30.910091   51251 cri.go:89] found id: ""
	I1018 17:45:30.910099   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:45:30.910154   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:30.914699   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:45:30.914767   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:45:30.944925   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:30.944970   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:30.944975   51251 cri.go:89] found id: ""
	I1018 17:45:30.944982   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:45:30.945037   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:30.948747   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:30.954312   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:45:30.954375   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:45:30.992317   51251 cri.go:89] found id: ""
	I1018 17:45:30.992339   51251 logs.go:282] 0 containers: []
	W1018 17:45:30.992347   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:45:30.992353   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:45:30.992409   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:45:31.020830   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:31.020849   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:31.020853   51251 cri.go:89] found id: ""
	I1018 17:45:31.020860   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:45:31.020918   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:31.025302   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:31.028979   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:45:31.029048   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:45:31.066137   51251 cri.go:89] found id: ""
	I1018 17:45:31.066238   51251 logs.go:282] 0 containers: []
	W1018 17:45:31.066262   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:45:31.066295   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:45:31.066401   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:45:31.093628   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:31.093651   51251 cri.go:89] found id: ""
	I1018 17:45:31.093659   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:45:31.093747   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:31.097751   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:45:31.097830   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:45:31.126496   51251 cri.go:89] found id: ""
	I1018 17:45:31.126517   51251 logs.go:282] 0 containers: []
	W1018 17:45:31.126526   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:45:31.126535   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:45:31.126547   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:45:31.199157   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:45:31.190529    8811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:31.191738    8811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:31.193086    8811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:31.193754    8811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:31.195583    8811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:45:31.190529    8811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:31.191738    8811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:31.193086    8811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:31.193754    8811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:31.195583    8811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:45:31.199180   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:45:31.199192   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:31.227645   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:45:31.227672   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:31.299176   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:45:31.299211   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:31.331846   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:45:31.331870   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:45:31.408603   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:45:31.408637   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:45:31.443678   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:45:31.443708   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:45:31.543336   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:45:31.543370   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:31.584237   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:45:31.584267   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:31.657778   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:45:31.657815   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:31.687304   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:45:31.687331   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:45:34.200278   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:45:34.213848   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:45:34.213915   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:45:34.240838   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:34.240860   51251 cri.go:89] found id: ""
	I1018 17:45:34.240874   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:45:34.240930   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:34.244825   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:45:34.244901   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:45:34.271020   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:34.271040   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:34.271044   51251 cri.go:89] found id: ""
	I1018 17:45:34.271052   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:45:34.271106   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:34.274974   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:34.278648   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:45:34.278748   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:45:34.306959   51251 cri.go:89] found id: ""
	I1018 17:45:34.306980   51251 logs.go:282] 0 containers: []
	W1018 17:45:34.306988   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:45:34.307023   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:45:34.307092   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:45:34.332551   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:34.332573   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:34.332578   51251 cri.go:89] found id: ""
	I1018 17:45:34.332585   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:45:34.332641   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:34.336514   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:34.340414   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:45:34.340491   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:45:34.366530   51251 cri.go:89] found id: ""
	I1018 17:45:34.366556   51251 logs.go:282] 0 containers: []
	W1018 17:45:34.366566   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:45:34.366572   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:45:34.366633   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:45:34.393555   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:34.393573   51251 cri.go:89] found id: ""
	I1018 17:45:34.393581   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:45:34.393637   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:34.397566   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:45:34.397635   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:45:34.424542   51251 cri.go:89] found id: ""
	I1018 17:45:34.424566   51251 logs.go:282] 0 containers: []
	W1018 17:45:34.424575   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:45:34.424584   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:45:34.424595   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:45:34.436112   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:45:34.436137   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:45:34.507631   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:45:34.499819    8951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:34.500689    8951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:34.501741    8951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:34.502269    8951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:34.503964    8951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:45:34.499819    8951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:34.500689    8951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:34.501741    8951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:34.502269    8951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:34.503964    8951 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:45:34.507654   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:45:34.507666   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:34.562029   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:45:34.562062   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:34.599739   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:45:34.599770   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:34.628468   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:45:34.628493   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:45:34.702022   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:45:34.702053   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:45:34.731823   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:45:34.731851   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:45:34.830492   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:45:34.830526   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:34.860325   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:45:34.860350   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:34.928523   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:45:34.928564   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:37.460864   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:45:37.472124   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:45:37.472190   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:45:37.499832   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:37.499854   51251 cri.go:89] found id: ""
	I1018 17:45:37.499862   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:45:37.499920   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:37.503595   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:45:37.503663   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:45:37.531543   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:37.531563   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:37.531569   51251 cri.go:89] found id: ""
	I1018 17:45:37.531576   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:45:37.531630   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:37.535265   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:37.538643   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:45:37.538712   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:45:37.565328   51251 cri.go:89] found id: ""
	I1018 17:45:37.565359   51251 logs.go:282] 0 containers: []
	W1018 17:45:37.565368   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:45:37.565374   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:45:37.565434   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:45:37.602468   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:37.602489   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:37.602494   51251 cri.go:89] found id: ""
	I1018 17:45:37.602501   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:45:37.602557   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:37.606311   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:37.609849   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:45:37.609919   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:45:37.640018   51251 cri.go:89] found id: ""
	I1018 17:45:37.640087   51251 logs.go:282] 0 containers: []
	W1018 17:45:37.640110   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:45:37.640131   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:45:37.640216   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:45:37.666232   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:37.666305   51251 cri.go:89] found id: ""
	I1018 17:45:37.666334   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:45:37.666402   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:37.669826   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:45:37.669905   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:45:37.696068   51251 cri.go:89] found id: ""
	I1018 17:45:37.696104   51251 logs.go:282] 0 containers: []
	W1018 17:45:37.696112   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:45:37.696121   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:45:37.696158   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:37.767014   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:45:37.767049   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:37.799133   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:45:37.799158   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:45:37.883995   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:45:37.884029   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:37.919112   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:45:37.919145   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:37.968245   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:45:37.968269   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:45:38.008695   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:45:38.008740   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:45:38.109431   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:45:38.109506   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:45:38.124458   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:45:38.124529   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:45:38.217277   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:45:38.191743    9135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:38.192499    9135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:38.207164    9135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:38.208077    9135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:38.209702    9135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:45:38.191743    9135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:38.192499    9135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:38.207164    9135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:38.208077    9135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:38.209702    9135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:45:38.217297   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:45:38.217310   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:38.247001   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:45:38.247027   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:40.816985   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:45:40.827390   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:45:40.827474   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:45:40.854344   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:40.854363   51251 cri.go:89] found id: ""
	I1018 17:45:40.854371   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:45:40.854426   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:40.858780   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:45:40.858879   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:45:40.888649   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:40.888707   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:40.888726   51251 cri.go:89] found id: ""
	I1018 17:45:40.888754   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:45:40.888823   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:40.893141   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:40.897039   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:45:40.897111   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:45:40.930280   51251 cri.go:89] found id: ""
	I1018 17:45:40.930304   51251 logs.go:282] 0 containers: []
	W1018 17:45:40.930313   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:45:40.930319   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:45:40.930375   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:45:40.957741   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:40.957764   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:40.957769   51251 cri.go:89] found id: ""
	I1018 17:45:40.957777   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:45:40.957854   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:40.962938   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:40.967322   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:45:40.967388   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:45:40.995139   51251 cri.go:89] found id: ""
	I1018 17:45:40.995216   51251 logs.go:282] 0 containers: []
	W1018 17:45:40.995230   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:45:40.995237   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:45:40.995304   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:45:41.025259   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:41.025280   51251 cri.go:89] found id: ""
	I1018 17:45:41.025287   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:45:41.025344   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:41.029459   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:45:41.029553   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:45:41.055678   51251 cri.go:89] found id: ""
	I1018 17:45:41.055710   51251 logs.go:282] 0 containers: []
	W1018 17:45:41.055719   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:45:41.055728   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:45:41.055745   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:45:41.097365   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:45:41.097395   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:45:41.108644   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:45:41.108669   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:41.152656   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:45:41.152685   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:45:41.240199   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:45:41.240234   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:45:41.347931   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:45:41.347967   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:45:41.414489   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:45:41.405260    9247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:41.405872    9247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:41.407642    9247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:41.408232    9247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:41.410751    9247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:45:41.405260    9247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:41.405872    9247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:41.407642    9247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:41.408232    9247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:41.410751    9247 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:45:41.414511   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:45:41.414525   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:41.440777   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:45:41.440802   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:41.496567   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:45:41.496602   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:41.569402   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:45:41.569445   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:41.599116   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:45:41.599143   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:44.128092   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:45:44.139312   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:45:44.139380   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:45:44.166514   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:44.166533   51251 cri.go:89] found id: ""
	I1018 17:45:44.166541   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:45:44.166596   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:44.170245   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:45:44.170317   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:45:44.210379   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:44.210397   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:44.210402   51251 cri.go:89] found id: ""
	I1018 17:45:44.210410   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:45:44.210464   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:44.214239   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:44.217585   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:45:44.217650   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:45:44.242978   51251 cri.go:89] found id: ""
	I1018 17:45:44.243001   51251 logs.go:282] 0 containers: []
	W1018 17:45:44.243009   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:45:44.243016   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:45:44.243069   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:45:44.270660   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:44.270680   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:44.270685   51251 cri.go:89] found id: ""
	I1018 17:45:44.270692   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:45:44.270746   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:44.274435   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:44.278022   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:45:44.278090   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:45:44.314849   51251 cri.go:89] found id: ""
	I1018 17:45:44.314873   51251 logs.go:282] 0 containers: []
	W1018 17:45:44.314881   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:45:44.314887   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:45:44.314951   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:45:44.345002   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:44.345025   51251 cri.go:89] found id: ""
	I1018 17:45:44.345034   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:45:44.345091   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:44.348718   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:45:44.348785   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:45:44.373779   51251 cri.go:89] found id: ""
	I1018 17:45:44.373804   51251 logs.go:282] 0 containers: []
	W1018 17:45:44.373812   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:45:44.373828   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:45:44.373839   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:45:44.448448   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:45:44.448482   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:45:44.479822   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:45:44.479848   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:45:44.583615   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:45:44.583649   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:45:44.597191   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:45:44.597217   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:44.623357   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:45:44.623385   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:44.680939   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:45:44.680970   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:44.715142   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:45:44.715173   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:44.742106   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:45:44.742133   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:45:44.808539   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:45:44.799128    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:44.799968    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:44.801462    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:44.801790    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:44.803327    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:45:44.799128    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:44.799968    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:44.801462    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:44.801790    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:44.803327    9410 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:45:44.808609   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:45:44.808640   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:44.878644   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:45:44.878682   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:47.415612   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:45:47.426226   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:45:47.426291   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:45:47.453489   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:47.453509   51251 cri.go:89] found id: ""
	I1018 17:45:47.453517   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:45:47.453571   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:47.457326   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:45:47.457406   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:45:47.482854   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:47.482921   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:47.482931   51251 cri.go:89] found id: ""
	I1018 17:45:47.482939   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:45:47.482996   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:47.487182   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:47.490682   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:45:47.490788   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:45:47.518326   51251 cri.go:89] found id: ""
	I1018 17:45:47.518348   51251 logs.go:282] 0 containers: []
	W1018 17:45:47.518357   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:45:47.518364   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:45:47.518423   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:45:47.545707   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:47.545729   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:47.545734   51251 cri.go:89] found id: ""
	I1018 17:45:47.545742   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:45:47.545795   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:47.549377   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:47.552749   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:45:47.552816   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:45:47.578086   51251 cri.go:89] found id: ""
	I1018 17:45:47.578108   51251 logs.go:282] 0 containers: []
	W1018 17:45:47.578116   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:45:47.578122   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:45:47.578179   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:45:47.621041   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:47.621110   51251 cri.go:89] found id: ""
	I1018 17:45:47.621124   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:45:47.621185   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:47.624873   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:45:47.624982   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:45:47.651153   51251 cri.go:89] found id: ""
	I1018 17:45:47.651180   51251 logs.go:282] 0 containers: []
	W1018 17:45:47.651189   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:45:47.651198   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:45:47.651227   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:45:47.748488   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:45:47.748523   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:45:47.816047   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:45:47.807483    9500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:47.808149    9500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:47.809893    9500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:47.810874    9500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:47.812453    9500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:45:47.807483    9500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:47.808149    9500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:47.809893    9500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:47.810874    9500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:47.812453    9500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:45:47.816068   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:45:47.816080   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:47.845226   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:45:47.845251   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:47.898646   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:45:47.898681   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:47.939440   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:45:47.939471   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:47.973436   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:45:47.973499   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:45:48.008222   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:45:48.008264   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:45:48.022115   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:45:48.022146   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:48.101167   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:45:48.101270   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:48.133470   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:45:48.133539   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:45:50.714735   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:45:50.728888   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:45:50.729016   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:45:50.759926   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:50.759949   51251 cri.go:89] found id: ""
	I1018 17:45:50.759958   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:45:50.760018   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:50.764094   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:45:50.764177   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:45:50.790739   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:50.790770   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:50.790776   51251 cri.go:89] found id: ""
	I1018 17:45:50.790784   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:45:50.790848   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:50.794745   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:50.798617   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:45:50.798692   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:45:50.827817   51251 cri.go:89] found id: ""
	I1018 17:45:50.827854   51251 logs.go:282] 0 containers: []
	W1018 17:45:50.827863   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:45:50.827870   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:45:50.827952   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:45:50.856700   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:50.856719   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:50.856723   51251 cri.go:89] found id: ""
	I1018 17:45:50.856731   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:45:50.856784   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:50.860815   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:50.864675   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:45:50.864745   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:45:50.889856   51251 cri.go:89] found id: ""
	I1018 17:45:50.889881   51251 logs.go:282] 0 containers: []
	W1018 17:45:50.889889   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:45:50.889896   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:45:50.889976   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:45:50.918684   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:50.918708   51251 cri.go:89] found id: ""
	I1018 17:45:50.918716   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:45:50.918800   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:50.924460   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:45:50.924531   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:45:50.951436   51251 cri.go:89] found id: ""
	I1018 17:45:50.951457   51251 logs.go:282] 0 containers: []
	W1018 17:45:50.951465   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:45:50.951475   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:45:50.951491   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:45:50.967914   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:45:50.967945   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:51.025758   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:45:51.025791   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:51.076423   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:45:51.076458   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:51.107878   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:45:51.107909   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:51.140881   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:45:51.140910   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:45:51.218816   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:45:51.218847   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:45:51.285410   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:45:51.278013    9675 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:51.278510    9675 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:51.279993    9675 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:51.280335    9675 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:51.281812    9675 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:45:51.278013    9675 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:51.278510    9675 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:51.279993    9675 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:51.280335    9675 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:51.281812    9675 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:45:51.285432   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:45:51.285444   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:51.314747   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:45:51.314775   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:51.388168   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:45:51.388242   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:45:51.424772   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:45:51.424801   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
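[Editor's note] The cycle above repeatedly resolves container IDs per component with `sudo crictl ps -a --quiet --name=<component>`; empty output produces the "No container was found matching ..." warnings for kube-proxy and kindnet. A minimal Go sketch of that lookup (not minikube's actual cri.go code; the sudo/crictl invocation is copied from the logged commands, error handling is simplified):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainerIDs shells out to crictl the way the log above does and
	// returns the non-empty container IDs, one per output line.
	func listContainerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(string(out), "\n") {
			if id := strings.TrimSpace(line); id != "" {
				ids = append(ids, id)
			}
		}
		return ids, nil
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"} {
			ids, err := listContainerIDs(c)
			if err != nil {
				fmt.Printf("listing %s: %v\n", c, err)
				continue
			}
			// 0 containers here corresponds to the W-level "No container was found" lines.
			fmt.Printf("%d containers for %s: %v\n", len(ids), c, ids)
		}
	}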
	I1018 17:45:54.026323   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:45:54.037679   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:45:54.037753   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:45:54.064502   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:54.064524   51251 cri.go:89] found id: ""
	I1018 17:45:54.064532   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:45:54.064585   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:54.068305   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:45:54.068376   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:45:54.097996   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:54.098018   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:54.098023   51251 cri.go:89] found id: ""
	I1018 17:45:54.098031   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:45:54.098085   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:54.102024   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:54.105866   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:45:54.105944   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:45:54.139891   51251 cri.go:89] found id: ""
	I1018 17:45:54.139915   51251 logs.go:282] 0 containers: []
	W1018 17:45:54.139924   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:45:54.139931   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:45:54.139986   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:45:54.166319   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:54.166343   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:54.166347   51251 cri.go:89] found id: ""
	I1018 17:45:54.166355   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:45:54.166420   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:54.170521   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:54.174527   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:45:54.174590   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:45:54.219178   51251 cri.go:89] found id: ""
	I1018 17:45:54.219212   51251 logs.go:282] 0 containers: []
	W1018 17:45:54.219220   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:45:54.219227   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:45:54.219283   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:45:54.246579   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:54.246602   51251 cri.go:89] found id: ""
	I1018 17:45:54.246610   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:45:54.246667   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:54.250546   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:45:54.250651   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:45:54.282408   51251 cri.go:89] found id: ""
	I1018 17:45:54.282432   51251 logs.go:282] 0 containers: []
	W1018 17:45:54.282440   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:45:54.282449   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:45:54.282460   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:45:54.367430   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:45:54.348041    9774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:54.348865    9774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:54.361407    9774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:54.362108    9774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:54.363737    9774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:45:54.348041    9774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:54.348865    9774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:54.361407    9774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:54.362108    9774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:54.363737    9774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:45:54.367454   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:45:54.367467   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:54.393831   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:45:54.393863   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:54.435123   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:45:54.435155   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:54.491144   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:45:54.491188   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:54.527193   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:45:54.527223   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:54.604327   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:45:54.604369   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:54.636282   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:45:54.636312   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:45:54.714664   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:45:54.714698   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:45:54.752480   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:45:54.752508   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:45:54.858349   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:45:54.858422   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
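[Editor's note] Each "Gathering logs for ..." step above tails the last 400 lines from either a journald unit (kubelet, crio) or a container via crictl. A rough Go equivalent, assuming crictl and journalctl are available on the node; the container IDs are the etcd and kube-scheduler IDs that appear in the log, used purely for illustration:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// tailContainer mirrors: sudo /usr/local/bin/crictl logs --tail 400 <id>
	func tailContainer(id string) (string, error) {
		out, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
		return string(out), err
	}

	// tailUnit mirrors: sudo journalctl -u <unit> -n 400
	func tailUnit(unit string) (string, error) {
		out, err := exec.Command("sudo", "journalctl", "-u", unit, "-n", "400").CombinedOutput()
		return string(out), err
	}

	func main() {
		for _, unit := range []string{"kubelet", "crio"} {
			if out, err := tailUnit(unit); err == nil {
				fmt.Printf("=== %s: %d bytes ===\n", unit, len(out))
			}
		}
		for _, id := range []string{
			"02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768",
			"59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c",
		} {
			if out, err := tailContainer(id); err == nil {
				fmt.Printf("=== %s: %d bytes ===\n", id[:12], len(out))
			}
		}
	}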
	I1018 17:45:57.373300   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:45:57.384246   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:45:57.384335   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:45:57.415506   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:57.415571   51251 cri.go:89] found id: ""
	I1018 17:45:57.415595   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:45:57.415671   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:57.419389   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:45:57.419503   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:45:57.445186   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:57.445206   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:57.445211   51251 cri.go:89] found id: ""
	I1018 17:45:57.445219   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:45:57.445281   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:57.449004   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:57.452413   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:45:57.452492   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:45:57.477864   51251 cri.go:89] found id: ""
	I1018 17:45:57.477888   51251 logs.go:282] 0 containers: []
	W1018 17:45:57.477896   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:45:57.477903   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:45:57.477962   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:45:57.504898   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:57.504920   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:57.504931   51251 cri.go:89] found id: ""
	I1018 17:45:57.504977   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:45:57.505034   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:57.509061   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:57.513614   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:45:57.513685   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:45:57.544310   51251 cri.go:89] found id: ""
	I1018 17:45:57.544332   51251 logs.go:282] 0 containers: []
	W1018 17:45:57.544340   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:45:57.544346   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:45:57.544403   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:45:57.571245   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:57.571266   51251 cri.go:89] found id: ""
	I1018 17:45:57.571274   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:45:57.571331   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:45:57.575106   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:45:57.575176   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:45:57.606111   51251 cri.go:89] found id: ""
	I1018 17:45:57.606144   51251 logs.go:282] 0 containers: []
	W1018 17:45:57.606154   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:45:57.606162   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:45:57.606175   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:45:57.634184   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:45:57.634212   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:45:57.700157   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:45:57.700193   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:45:57.740730   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:45:57.740759   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:45:57.767473   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:45:57.767501   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:45:57.792761   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:45:57.792788   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:45:57.872610   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:45:57.872686   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:45:57.970465   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:45:57.970503   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:45:57.983943   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:45:57.983969   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:45:58.065431   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:45:58.056364    9964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:58.057407    9964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:58.058182    9964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:58.059825    9964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:58.060434    9964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:45:58.056364    9964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:58.057407    9964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:58.058182    9964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:58.059825    9964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:45:58.060434    9964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:45:58.065498   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:45:58.065512   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:45:58.140361   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:45:58.140407   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
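[Editor's note] The "container status" step above uses a shell fallback: prefer crictl when installed, otherwise `docker ps -a`. A small Go sketch that runs the same logged command string through bash (hypothetical wrapper, not minikube code):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same fallback as the logged command: use crictl if present, else docker.
		cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Println("container status failed:", err)
		}
		fmt.Print(string(out))
	}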
	I1018 17:46:00.709339   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:46:00.720914   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:46:00.721109   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:46:00.749016   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:00.749036   51251 cri.go:89] found id: ""
	I1018 17:46:00.749043   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:46:00.749098   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:00.752785   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:46:00.752913   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:46:00.780089   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:00.780157   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:00.780174   51251 cri.go:89] found id: ""
	I1018 17:46:00.780195   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:46:00.780277   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:00.784027   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:00.787918   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:46:00.787984   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:46:00.815886   51251 cri.go:89] found id: ""
	I1018 17:46:00.815911   51251 logs.go:282] 0 containers: []
	W1018 17:46:00.815920   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:46:00.815927   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:46:00.815984   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:46:00.843641   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:00.843672   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:00.843677   51251 cri.go:89] found id: ""
	I1018 17:46:00.843690   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:46:00.843749   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:00.857213   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:00.861599   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:46:00.861750   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:46:00.895883   51251 cri.go:89] found id: ""
	I1018 17:46:00.895957   51251 logs.go:282] 0 containers: []
	W1018 17:46:00.895981   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:46:00.896000   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:46:00.896070   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:46:00.925992   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:00.926061   51251 cri.go:89] found id: ""
	I1018 17:46:00.926086   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:46:00.926167   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:00.930024   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:46:00.930108   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:46:00.958457   51251 cri.go:89] found id: ""
	I1018 17:46:00.958482   51251 logs.go:282] 0 containers: []
	W1018 17:46:00.958490   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:46:00.958499   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:46:00.958511   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:01.035152   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:46:01.035187   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:01.069631   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:46:01.069662   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:01.099442   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:46:01.099466   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:46:01.185919   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:46:01.185957   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:46:01.233776   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:46:01.233801   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:46:01.247414   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:46:01.247442   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:01.275612   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:46:01.275640   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:01.332794   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:46:01.332829   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:01.367809   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:46:01.367840   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:46:01.464892   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:46:01.464929   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:46:01.535577   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:46:01.527773   10121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:01.528316   10121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:01.530190   10121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:01.530564   10121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:01.531863   10121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:46:01.527773   10121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:01.528316   10121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:01.530190   10121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:01.530564   10121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:01.531863   10121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
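[Editor's note] The recurring "dial tcp [::1]:8443: connect: connection refused" errors mean nothing is accepting connections on the apiserver port yet, so every `kubectl describe nodes` attempt fails the same way. A trivial Go probe (illustrative only, not part of the test) reproduces that failure mode until the apiserver begins listening:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			// e.g. "dial tcp [::1]:8443: connect: connection refused"
			fmt.Println("apiserver not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver port is accepting connections")
	}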
	I1018 17:46:04.037058   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:46:04.047958   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:46:04.048043   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:46:04.080745   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:04.080770   51251 cri.go:89] found id: ""
	I1018 17:46:04.080779   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:46:04.080837   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:04.084749   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:46:04.084819   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:46:04.113194   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:04.113268   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:04.113275   51251 cri.go:89] found id: ""
	I1018 17:46:04.113283   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:46:04.113374   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:04.117058   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:04.121021   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:46:04.121088   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:46:04.150209   51251 cri.go:89] found id: ""
	I1018 17:46:04.150233   51251 logs.go:282] 0 containers: []
	W1018 17:46:04.150242   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:46:04.150248   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:46:04.150308   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:46:04.182648   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:04.182719   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:04.182732   51251 cri.go:89] found id: ""
	I1018 17:46:04.182740   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:46:04.182811   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:04.187068   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:04.191187   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:46:04.191265   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:46:04.226123   51251 cri.go:89] found id: ""
	I1018 17:46:04.226147   51251 logs.go:282] 0 containers: []
	W1018 17:46:04.226158   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:46:04.226165   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:46:04.226226   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:46:04.252111   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:04.252132   51251 cri.go:89] found id: ""
	I1018 17:46:04.252141   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:46:04.252196   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:04.255953   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:46:04.256026   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:46:04.287389   51251 cri.go:89] found id: ""
	I1018 17:46:04.287415   51251 logs.go:282] 0 containers: []
	W1018 17:46:04.287423   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:46:04.287432   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:46:04.287443   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:46:04.321947   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:46:04.321973   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:46:04.430342   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:46:04.430376   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:46:04.442744   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:46:04.442769   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:46:04.506948   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:46:04.498862   10206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:04.499448   10206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:04.501006   10206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:04.501596   10206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:04.503108   10206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:46:04.498862   10206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:04.499448   10206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:04.501006   10206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:04.501596   10206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:04.503108   10206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:46:04.507014   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:46:04.507043   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:04.543328   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:46:04.543361   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:04.572765   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:46:04.572798   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:04.602775   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:46:04.602801   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:04.658777   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:46:04.658812   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:04.732490   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:46:04.732537   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:04.759977   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:46:04.760005   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:46:07.339053   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:46:07.349656   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:46:07.349760   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:46:07.379978   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:07.380001   51251 cri.go:89] found id: ""
	I1018 17:46:07.380011   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:46:07.380093   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:07.383927   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:46:07.384018   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:46:07.409769   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:07.409800   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:07.409806   51251 cri.go:89] found id: ""
	I1018 17:46:07.409814   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:46:07.409902   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:07.413658   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:07.416960   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:46:07.417067   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:46:07.442892   51251 cri.go:89] found id: ""
	I1018 17:46:07.442916   51251 logs.go:282] 0 containers: []
	W1018 17:46:07.442924   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:46:07.442930   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:46:07.442989   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:46:07.469419   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:07.469440   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:07.469445   51251 cri.go:89] found id: ""
	I1018 17:46:07.469452   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:46:07.469508   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:07.473607   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:07.477386   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:46:07.477501   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:46:07.504080   51251 cri.go:89] found id: ""
	I1018 17:46:07.504105   51251 logs.go:282] 0 containers: []
	W1018 17:46:07.504116   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:46:07.504122   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:46:07.504231   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:46:07.531758   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:07.531781   51251 cri.go:89] found id: ""
	I1018 17:46:07.531790   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:46:07.531870   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:07.535733   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:46:07.535830   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:46:07.564437   51251 cri.go:89] found id: ""
	I1018 17:46:07.564463   51251 logs.go:282] 0 containers: []
	W1018 17:46:07.564471   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:46:07.564480   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:46:07.564524   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:07.628243   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:46:07.628278   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:46:07.662025   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:46:07.662052   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:46:07.764863   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:46:07.764897   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:46:07.776837   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:46:07.776865   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:46:07.847586   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:46:07.839604   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:07.840186   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:07.841835   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:07.842344   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:07.843875   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:46:07.839604   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:07.840186   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:07.841835   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:07.842344   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:07.843875   10352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:46:07.847606   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:46:07.847622   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:07.880085   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:46:07.880117   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:07.963636   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:46:07.963671   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:07.994194   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:46:07.994222   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:08.025564   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:46:08.025595   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:46:08.108415   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:46:08.108451   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
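[Editor's note] The cadence visible above (checks at 17:45:50, :54, :57, 17:46:00, :04, :07, :10, ...) is a poll-until-ready loop around `sudo pgrep -xnf kube-apiserver.*minikube.*`. A sketch of that retry pattern; the 3-second interval and 4-minute deadline are assumptions for illustration, not minikube's actual values:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// apiserverRunning mirrors the logged pgrep check; Run returns nil on a match.
	func apiserverRunning() bool {
		return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
	}

	func main() {
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			if apiserverRunning() {
				fmt.Println("kube-apiserver process found")
				return
			}
			time.Sleep(3 * time.Second)
		}
		fmt.Println("timed out waiting for kube-apiserver")
	}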
	I1018 17:46:10.642798   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:46:10.653476   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:46:10.653548   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:46:10.679376   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:10.679398   51251 cri.go:89] found id: ""
	I1018 17:46:10.679407   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:46:10.679465   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:10.683355   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:46:10.683427   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:46:10.710429   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:10.710450   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:10.710454   51251 cri.go:89] found id: ""
	I1018 17:46:10.710461   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:46:10.710513   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:10.714130   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:10.717443   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:46:10.717506   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:46:10.744042   51251 cri.go:89] found id: ""
	I1018 17:46:10.744064   51251 logs.go:282] 0 containers: []
	W1018 17:46:10.744071   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:46:10.744078   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:46:10.744132   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:46:10.773166   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:10.773191   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:10.773196   51251 cri.go:89] found id: ""
	I1018 17:46:10.773203   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:46:10.773282   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:10.777442   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:10.781226   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:46:10.781299   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:46:10.808886   51251 cri.go:89] found id: ""
	I1018 17:46:10.808909   51251 logs.go:282] 0 containers: []
	W1018 17:46:10.808917   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:46:10.808924   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:46:10.809009   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:46:10.836634   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:10.836656   51251 cri.go:89] found id: ""
	I1018 17:46:10.836664   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:46:10.836720   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:10.840695   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:46:10.840772   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:46:10.869735   51251 cri.go:89] found id: ""
	I1018 17:46:10.869799   51251 logs.go:282] 0 containers: []
	W1018 17:46:10.869812   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:46:10.869822   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:46:10.869833   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:46:10.949626   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:46:10.949665   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:46:11.057346   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:46:11.057383   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:11.139105   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:46:11.139141   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:11.170764   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:46:11.170861   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:11.214148   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:46:11.214173   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:46:11.245381   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:46:11.245409   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:46:11.258609   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:46:11.258636   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:46:11.329040   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:46:11.320826   10509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:11.321453   10509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:11.322971   10509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:11.323467   10509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:11.325006   10509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:46:11.320826   10509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:11.321453   10509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:11.322971   10509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:11.323467   10509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:11.325006   10509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:46:11.329060   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:46:11.329072   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:11.354686   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:46:11.354710   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:11.393844   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:46:11.393872   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:13.965067   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:46:13.977065   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:46:13.977139   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:46:14.006565   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:14.006590   51251 cri.go:89] found id: ""
	I1018 17:46:14.006600   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:46:14.006694   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:14.011312   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:46:14.011387   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:46:14.040339   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:14.040367   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:14.040372   51251 cri.go:89] found id: ""
	I1018 17:46:14.040380   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:46:14.040437   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:14.044065   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:14.047760   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:46:14.047831   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:46:14.074918   51251 cri.go:89] found id: ""
	I1018 17:46:14.074943   51251 logs.go:282] 0 containers: []
	W1018 17:46:14.074952   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:46:14.074960   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:46:14.075023   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:46:14.107504   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:14.107526   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:14.107531   51251 cri.go:89] found id: ""
	I1018 17:46:14.107539   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:46:14.107591   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:14.111227   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:14.114719   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:46:14.114811   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:46:14.145967   51251 cri.go:89] found id: ""
	I1018 17:46:14.146042   51251 logs.go:282] 0 containers: []
	W1018 17:46:14.146062   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:46:14.146082   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:46:14.146164   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:46:14.186824   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:14.186888   51251 cri.go:89] found id: ""
	I1018 17:46:14.186910   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:46:14.186990   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:14.190545   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:46:14.190628   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:46:14.226876   51251 cri.go:89] found id: ""
	I1018 17:46:14.226971   51251 logs.go:282] 0 containers: []
	W1018 17:46:14.226994   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:46:14.227020   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:46:14.227045   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:46:14.329164   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:46:14.329201   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:46:14.397274   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:46:14.389270   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:14.390097   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:14.391638   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:14.392076   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:14.393694   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:46:14.389270   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:14.390097   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:14.391638   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:14.392076   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:14.393694   10604 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:46:14.397296   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:46:14.397309   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:14.426769   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:46:14.426796   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:14.486615   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:46:14.486650   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:14.559349   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:46:14.559386   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:14.587426   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:46:14.587455   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:46:14.664068   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:46:14.664104   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:46:14.675861   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:46:14.675886   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:14.708879   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:46:14.708911   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:14.736861   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:46:14.736890   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:46:17.281896   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:46:17.292988   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:46:17.293081   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:46:17.321611   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:17.321634   51251 cri.go:89] found id: ""
	I1018 17:46:17.321642   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:46:17.321697   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:17.325317   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:46:17.325398   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:46:17.352512   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:17.352534   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:17.352538   51251 cri.go:89] found id: ""
	I1018 17:46:17.352546   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:46:17.352599   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:17.357098   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:17.360560   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:46:17.360677   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:46:17.390732   51251 cri.go:89] found id: ""
	I1018 17:46:17.390762   51251 logs.go:282] 0 containers: []
	W1018 17:46:17.390770   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:46:17.390778   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:46:17.390842   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:46:17.419824   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:17.419846   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:17.419851   51251 cri.go:89] found id: ""
	I1018 17:46:17.419858   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:46:17.419916   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:17.423710   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:17.427116   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:46:17.427185   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:46:17.453579   51251 cri.go:89] found id: ""
	I1018 17:46:17.453602   51251 logs.go:282] 0 containers: []
	W1018 17:46:17.453610   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:46:17.453617   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:46:17.453705   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:46:17.486285   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:17.486309   51251 cri.go:89] found id: ""
	I1018 17:46:17.486318   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:46:17.486372   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:17.490015   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:46:17.490104   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:46:17.518259   51251 cri.go:89] found id: ""
	I1018 17:46:17.518284   51251 logs.go:282] 0 containers: []
	W1018 17:46:17.518292   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:46:17.518301   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:46:17.518332   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:46:17.614000   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:46:17.614035   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:46:17.626518   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:46:17.626553   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:17.684157   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:46:17.684191   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:46:17.730343   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:46:17.730369   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:46:17.798308   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:46:17.789990   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:17.790724   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:17.792367   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:17.792674   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:17.794211   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:46:17.789990   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:17.790724   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:17.792367   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:17.792674   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:17.794211   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:46:17.798326   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:46:17.798338   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:17.823833   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:46:17.823857   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:17.865773   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:46:17.865799   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:17.935865   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:46:17.935900   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:17.978061   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:46:17.978088   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:18.006175   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:46:18.006205   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:46:20.594229   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:46:20.605152   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:46:20.605223   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:46:20.633212   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:20.633234   51251 cri.go:89] found id: ""
	I1018 17:46:20.633243   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:46:20.633310   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:20.637046   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:46:20.637118   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:46:20.663217   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:20.663238   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:20.663246   51251 cri.go:89] found id: ""
	I1018 17:46:20.663253   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:46:20.663325   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:20.667226   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:20.670621   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:46:20.670719   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:46:20.698213   51251 cri.go:89] found id: ""
	I1018 17:46:20.698235   51251 logs.go:282] 0 containers: []
	W1018 17:46:20.698244   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:46:20.698287   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:46:20.698367   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:46:20.730404   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:20.730434   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:20.730439   51251 cri.go:89] found id: ""
	I1018 17:46:20.730447   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:46:20.730519   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:20.734442   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:20.738131   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:46:20.738222   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:46:20.773079   51251 cri.go:89] found id: ""
	I1018 17:46:20.773149   51251 logs.go:282] 0 containers: []
	W1018 17:46:20.773171   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:46:20.773193   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:46:20.773277   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:46:20.800462   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:20.800534   51251 cri.go:89] found id: ""
	I1018 17:46:20.800569   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:46:20.800664   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:20.805115   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:46:20.805213   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:46:20.830418   51251 cri.go:89] found id: ""
	I1018 17:46:20.830442   51251 logs.go:282] 0 containers: []
	W1018 17:46:20.830451   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:46:20.830459   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:46:20.830470   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:20.912043   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:46:20.912075   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:20.938545   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:46:20.938572   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:20.977936   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:46:20.978010   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:46:21.013920   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:46:21.013950   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:46:21.119416   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:46:21.119450   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:46:21.132924   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:46:21.133048   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:46:21.220628   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:46:21.211038   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:21.212205   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:21.213238   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:21.213888   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:21.215798   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:46:21.211038   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:21.212205   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:21.213238   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:21.213888   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:21.215798   10922 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:46:21.220657   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:46:21.220677   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:21.249593   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:46:21.249618   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:46:21.329125   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:46:21.329162   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:21.387066   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:46:21.387097   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:23.926900   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:46:23.937764   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:46:23.937832   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:46:23.976069   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:23.976129   51251 cri.go:89] found id: ""
	I1018 17:46:23.976159   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:46:23.976235   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:23.979863   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:46:23.979943   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:46:24.009930   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:24.009950   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:24.009954   51251 cri.go:89] found id: ""
	I1018 17:46:24.009963   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:46:24.010025   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:24.014274   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:24.018246   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:46:24.018317   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:46:24.046546   51251 cri.go:89] found id: ""
	I1018 17:46:24.046571   51251 logs.go:282] 0 containers: []
	W1018 17:46:24.046589   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:46:24.046596   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:46:24.046659   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:46:24.073391   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:24.073411   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:24.073416   51251 cri.go:89] found id: ""
	I1018 17:46:24.073428   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:46:24.073485   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:24.077447   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:24.081009   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:46:24.081083   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:46:24.108804   51251 cri.go:89] found id: ""
	I1018 17:46:24.108828   51251 logs.go:282] 0 containers: []
	W1018 17:46:24.108837   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:46:24.108843   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:46:24.108905   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:46:24.144321   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:24.144348   51251 cri.go:89] found id: ""
	I1018 17:46:24.144357   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:46:24.144413   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:24.148488   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:46:24.148592   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:46:24.176586   51251 cri.go:89] found id: ""
	I1018 17:46:24.176611   51251 logs.go:282] 0 containers: []
	W1018 17:46:24.176619   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:46:24.176629   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:46:24.176640   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:46:24.254257   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:46:24.245066   11017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:24.246406   11017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:24.248217   11017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:24.248923   11017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:24.250447   11017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:46:24.245066   11017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:24.246406   11017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:24.248217   11017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:24.248923   11017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:24.250447   11017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:46:24.254278   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:46:24.254290   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:24.281646   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:46:24.281673   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:24.354939   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:46:24.354974   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:24.383116   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:46:24.383140   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:46:24.462892   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:46:24.462927   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:46:24.504197   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:46:24.504228   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:24.562928   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:46:24.562961   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:24.599399   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:46:24.599433   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:24.631679   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:46:24.631746   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:46:24.732308   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:46:24.732344   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:46:27.244674   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:46:27.255895   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:46:27.256012   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:46:27.287040   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:27.287060   51251 cri.go:89] found id: ""
	I1018 17:46:27.287069   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:46:27.287149   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:27.290894   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:46:27.290963   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:46:27.320255   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:27.320275   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:27.320280   51251 cri.go:89] found id: ""
	I1018 17:46:27.320287   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:46:27.320342   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:27.323980   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:27.327547   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:46:27.327617   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:46:27.352735   51251 cri.go:89] found id: ""
	I1018 17:46:27.352759   51251 logs.go:282] 0 containers: []
	W1018 17:46:27.352768   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:46:27.352774   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:46:27.352857   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:46:27.379505   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:27.379527   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:27.379532   51251 cri.go:89] found id: ""
	I1018 17:46:27.379539   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:46:27.379595   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:27.383294   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:27.386911   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:46:27.386986   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:46:27.415912   51251 cri.go:89] found id: ""
	I1018 17:46:27.415934   51251 logs.go:282] 0 containers: []
	W1018 17:46:27.415943   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:46:27.415949   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:46:27.416005   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:46:27.445650   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:27.445672   51251 cri.go:89] found id: ""
	I1018 17:46:27.445682   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:46:27.445741   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:27.449604   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:46:27.449704   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:46:27.484794   51251 cri.go:89] found id: ""
	I1018 17:46:27.484859   51251 logs.go:282] 0 containers: []
	W1018 17:46:27.484882   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:46:27.484904   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:46:27.484958   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:46:27.584293   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:46:27.584332   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:27.648407   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:46:27.648440   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:27.676738   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:46:27.676766   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:46:27.689349   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:46:27.689383   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:46:27.762040   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:46:27.753582   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:27.754358   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:27.756209   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:27.756792   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:27.758400   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:46:27.753582   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:27.754358   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:27.756209   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:27.756792   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:27.758400   11172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:46:27.762060   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:46:27.762074   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:27.788162   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:46:27.788190   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:27.822151   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:46:27.822180   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:27.891958   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:46:27.891993   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:27.920389   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:46:27.920413   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:46:28.000828   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:46:28.000902   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:46:30.539090   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:46:30.549624   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:46:30.549693   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:46:30.576191   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:30.576210   51251 cri.go:89] found id: ""
	I1018 17:46:30.576218   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:46:30.576270   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:30.580032   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:46:30.580143   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:46:30.605554   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:30.605576   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:30.605582   51251 cri.go:89] found id: ""
	I1018 17:46:30.605600   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:46:30.605693   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:30.609432   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:30.613226   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:46:30.613297   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:46:30.640206   51251 cri.go:89] found id: ""
	I1018 17:46:30.640232   51251 logs.go:282] 0 containers: []
	W1018 17:46:30.640241   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:46:30.640248   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:46:30.640305   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:46:30.667995   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:30.668022   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:30.668027   51251 cri.go:89] found id: ""
	I1018 17:46:30.668035   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:46:30.668090   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:30.671800   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:30.675538   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:46:30.675607   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:46:30.700530   51251 cri.go:89] found id: ""
	I1018 17:46:30.700554   51251 logs.go:282] 0 containers: []
	W1018 17:46:30.700562   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:46:30.700568   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:46:30.700623   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:46:30.728589   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:30.728610   51251 cri.go:89] found id: ""
	I1018 17:46:30.728618   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:46:30.728673   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:30.732322   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:46:30.732414   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:46:30.757553   51251 cri.go:89] found id: ""
	I1018 17:46:30.757577   51251 logs.go:282] 0 containers: []
	W1018 17:46:30.757586   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:46:30.757594   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:46:30.757635   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:46:30.823888   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:46:30.816309   11289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:30.816862   11289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:30.818339   11289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:30.818806   11289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:30.820240   11289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:46:30.816309   11289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:30.816862   11289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:30.818339   11289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:30.818806   11289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:30.820240   11289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:46:30.823908   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:46:30.823921   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:30.849213   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:46:30.849239   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:30.906353   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:46:30.906387   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:30.995137   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:46:30.995173   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:46:31.081727   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:46:31.081761   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:46:31.125969   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:46:31.125994   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:46:31.232441   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:46:31.232474   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:46:31.244403   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:46:31.244430   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:31.288661   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:46:31.288704   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:31.322411   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:46:31.322439   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:33.853119   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:46:33.864167   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:46:33.864236   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:46:33.897397   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:33.897420   51251 cri.go:89] found id: ""
	I1018 17:46:33.897428   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:46:33.897485   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:33.901240   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:46:33.901310   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:46:33.929613   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:33.929646   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:33.929651   51251 cri.go:89] found id: ""
	I1018 17:46:33.929658   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:46:33.929735   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:33.933312   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:33.936856   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:46:33.936964   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:46:33.977530   51251 cri.go:89] found id: ""
	I1018 17:46:33.977558   51251 logs.go:282] 0 containers: []
	W1018 17:46:33.977566   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:46:33.977573   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:46:33.977631   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:46:34.012562   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:34.012584   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:34.012589   51251 cri.go:89] found id: ""
	I1018 17:46:34.012596   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:46:34.012656   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:34.016474   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:34.020781   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:46:34.020852   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:46:34.046987   51251 cri.go:89] found id: ""
	I1018 17:46:34.047014   51251 logs.go:282] 0 containers: []
	W1018 17:46:34.047022   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:46:34.047029   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:46:34.047086   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:46:34.076543   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:34.076564   51251 cri.go:89] found id: ""
	I1018 17:46:34.076575   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:46:34.076631   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:34.080378   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:46:34.080449   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:46:34.107694   51251 cri.go:89] found id: ""
	I1018 17:46:34.107716   51251 logs.go:282] 0 containers: []
	W1018 17:46:34.107724   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:46:34.107734   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:46:34.107745   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:46:34.119659   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:46:34.119686   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:34.177728   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:46:34.177831   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:34.238468   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:46:34.238509   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:34.321582   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:46:34.321620   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:34.353750   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:46:34.353776   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:34.384525   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:46:34.384552   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:46:34.462817   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:46:34.462849   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:46:34.494982   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:46:34.495010   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:46:34.598168   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:46:34.598203   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:46:34.675787   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:46:34.666968   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:34.667733   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:34.669584   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:34.670213   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:34.671781   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:46:34.666968   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:34.667733   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:34.669584   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:34.670213   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:34.671781   11482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:46:34.675809   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:46:34.675822   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:37.204073   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:46:37.217257   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:46:37.217324   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:46:37.242870   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:37.242892   51251 cri.go:89] found id: ""
	I1018 17:46:37.242900   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:46:37.242956   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:37.246583   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:46:37.246652   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:46:37.272095   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:37.272157   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:37.272174   51251 cri.go:89] found id: ""
	I1018 17:46:37.272195   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:46:37.272279   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:37.276536   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:37.280121   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:46:37.280190   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:46:37.305151   51251 cri.go:89] found id: ""
	I1018 17:46:37.305173   51251 logs.go:282] 0 containers: []
	W1018 17:46:37.305182   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:46:37.305188   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:46:37.305244   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:46:37.338068   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:37.338137   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:37.338155   51251 cri.go:89] found id: ""
	I1018 17:46:37.338191   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:46:37.338263   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:37.342725   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:37.346547   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:46:37.346621   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:46:37.374074   51251 cri.go:89] found id: ""
	I1018 17:46:37.374095   51251 logs.go:282] 0 containers: []
	W1018 17:46:37.374104   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:46:37.374110   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:46:37.374167   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:46:37.405324   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:37.405346   51251 cri.go:89] found id: ""
	I1018 17:46:37.405360   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:46:37.405434   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:37.409814   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:46:37.409899   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:46:37.435527   51251 cri.go:89] found id: ""
	I1018 17:46:37.435551   51251 logs.go:282] 0 containers: []
	W1018 17:46:37.435560   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:46:37.435568   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:46:37.435579   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:46:37.504448   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:46:37.496518   11559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:37.497134   11559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:37.498616   11559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:37.499058   11559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:37.500376   11559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:46:37.496518   11559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:37.497134   11559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:37.498616   11559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:37.499058   11559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:37.500376   11559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:46:37.504468   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:46:37.504482   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:37.533375   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:46:37.533403   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:37.598625   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:46:37.598661   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:37.634535   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:46:37.634563   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:46:37.717277   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:46:37.717311   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:46:37.818978   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:46:37.819016   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:46:37.832055   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:46:37.832084   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:37.904377   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:46:37.904408   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:37.938939   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:46:37.938966   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:37.981000   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:46:37.981027   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:46:40.513454   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:46:40.524358   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:46:40.524437   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:46:40.552377   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:40.552454   51251 cri.go:89] found id: ""
	I1018 17:46:40.552475   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:46:40.552563   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:40.556445   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:46:40.556565   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:46:40.582695   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:40.582726   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:40.582732   51251 cri.go:89] found id: ""
	I1018 17:46:40.582739   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:46:40.582814   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:40.586779   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:40.590379   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:46:40.590449   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:46:40.618010   51251 cri.go:89] found id: ""
	I1018 17:46:40.618034   51251 logs.go:282] 0 containers: []
	W1018 17:46:40.618050   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:46:40.618056   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:46:40.618113   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:46:40.648753   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:40.648776   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:40.648782   51251 cri.go:89] found id: ""
	I1018 17:46:40.648790   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:46:40.648848   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:40.652681   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:40.656399   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:46:40.656475   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:46:40.682133   51251 cri.go:89] found id: ""
	I1018 17:46:40.682157   51251 logs.go:282] 0 containers: []
	W1018 17:46:40.682165   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:46:40.682180   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:46:40.682236   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:46:40.709218   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:40.709242   51251 cri.go:89] found id: ""
	I1018 17:46:40.709250   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:46:40.709309   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:40.713679   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:46:40.713762   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:46:40.739858   51251 cri.go:89] found id: ""
	I1018 17:46:40.739881   51251 logs.go:282] 0 containers: []
	W1018 17:46:40.739889   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:46:40.739899   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:46:40.739910   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:40.767013   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:46:40.767039   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:46:40.815169   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:46:40.815198   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:46:40.828097   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:46:40.828174   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:40.854852   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:46:40.854880   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:40.928587   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:46:40.928623   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:40.967185   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:46:40.967264   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:41.043445   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:46:41.043480   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:41.073682   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:46:41.073706   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:46:41.167926   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:46:41.167960   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:46:41.279975   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:46:41.280011   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:46:41.354826   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:46:41.337935   11766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:41.339488   11766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:41.340251   11766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:41.347202   11766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:41.347805   11766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:46:41.337935   11766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:41.339488   11766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:41.340251   11766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:41.347202   11766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:41.347805   11766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:46:43.856192   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:46:43.867961   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:46:43.868072   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:46:43.894221   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:43.894243   51251 cri.go:89] found id: ""
	I1018 17:46:43.894252   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:46:43.894332   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:43.898170   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:46:43.898263   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:46:43.925956   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:43.926031   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:43.926050   51251 cri.go:89] found id: ""
	I1018 17:46:43.926070   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:46:43.926142   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:43.929746   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:43.933185   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:46:43.933255   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:46:43.959602   51251 cri.go:89] found id: ""
	I1018 17:46:43.959627   51251 logs.go:282] 0 containers: []
	W1018 17:46:43.959635   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:46:43.959647   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:46:43.959704   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:46:43.991256   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:43.991325   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:43.991354   51251 cri.go:89] found id: ""
	I1018 17:46:43.991375   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:46:43.991457   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:43.995372   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:43.999083   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:46:43.999191   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:46:44.027597   51251 cri.go:89] found id: ""
	I1018 17:46:44.027632   51251 logs.go:282] 0 containers: []
	W1018 17:46:44.027641   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:46:44.027647   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:46:44.027715   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:46:44.055061   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:44.055085   51251 cri.go:89] found id: ""
	I1018 17:46:44.055094   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:46:44.055163   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:44.059234   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:46:44.059339   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:46:44.087631   51251 cri.go:89] found id: ""
	I1018 17:46:44.087653   51251 logs.go:282] 0 containers: []
	W1018 17:46:44.087661   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:46:44.087670   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:46:44.087681   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:46:44.189442   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:46:44.189477   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:44.218935   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:46:44.218961   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:44.286708   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:46:44.286746   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:44.321434   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:46:44.321463   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:46:44.399455   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:46:44.399492   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:46:44.434475   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:46:44.434502   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:46:44.448230   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:46:44.448256   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:46:44.523028   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:46:44.515201   11882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:44.515969   11882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:44.517455   11882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:44.517964   11882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:44.519503   11882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:46:44.515201   11882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:44.515969   11882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:44.517455   11882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:44.517964   11882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:44.519503   11882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:46:44.523047   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:46:44.523060   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:44.559772   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:46:44.559799   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:44.632864   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:46:44.632968   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:47.163147   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:46:47.174684   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:46:47.174753   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:46:47.212548   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:47.212575   51251 cri.go:89] found id: ""
	I1018 17:46:47.212583   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:46:47.212638   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:47.216970   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:46:47.217043   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:46:47.246472   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:47.246547   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:47.246565   51251 cri.go:89] found id: ""
	I1018 17:46:47.246585   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:46:47.246669   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:47.252448   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:47.255988   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:46:47.256113   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:46:47.287109   51251 cri.go:89] found id: ""
	I1018 17:46:47.287134   51251 logs.go:282] 0 containers: []
	W1018 17:46:47.287144   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:46:47.287150   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:46:47.287211   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:46:47.316914   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:47.316964   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:47.316969   51251 cri.go:89] found id: ""
	I1018 17:46:47.316977   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:46:47.317032   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:47.320849   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:47.324385   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:46:47.324455   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:46:47.351869   51251 cri.go:89] found id: ""
	I1018 17:46:47.351894   51251 logs.go:282] 0 containers: []
	W1018 17:46:47.351902   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:46:47.351908   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:46:47.351963   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:46:47.378692   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:47.378712   51251 cri.go:89] found id: ""
	I1018 17:46:47.378720   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:46:47.378773   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:47.382267   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:46:47.382341   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:46:47.417848   51251 cri.go:89] found id: ""
	I1018 17:46:47.417914   51251 logs.go:282] 0 containers: []
	W1018 17:46:47.417928   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:46:47.417938   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:46:47.417953   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:46:47.515489   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:46:47.515527   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:46:47.598137   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:46:47.585088   11978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:47.586210   11978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:47.586811   11978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:47.592142   11978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:47.592951   11978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:46:47.585088   11978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:47.586210   11978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:47.586811   11978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:47.592142   11978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:47.592951   11978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:46:47.598159   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:46:47.598172   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:47.627147   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:46:47.627171   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:47.685715   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:46:47.685749   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:47.729509   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:46:47.729542   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:47.802620   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:46:47.802658   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:47.841366   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:46:47.841393   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:46:47.853500   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:46:47.853528   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:47.882085   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:46:47.882112   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:46:47.962102   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:46:47.962182   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:46:50.497378   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:46:50.509438   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:46:50.509515   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:46:50.536827   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:50.536845   51251 cri.go:89] found id: ""
	I1018 17:46:50.536853   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:46:50.536906   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:50.540656   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:46:50.540736   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:46:50.572295   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:50.572315   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:50.572319   51251 cri.go:89] found id: ""
	I1018 17:46:50.572326   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:46:50.572381   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:50.576114   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:50.579678   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:46:50.579767   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:46:50.604801   51251 cri.go:89] found id: ""
	I1018 17:46:50.604883   51251 logs.go:282] 0 containers: []
	W1018 17:46:50.604907   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:46:50.604953   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:46:50.605039   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:46:50.630628   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:50.630689   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:50.630709   51251 cri.go:89] found id: ""
	I1018 17:46:50.630731   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:46:50.630799   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:50.634652   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:50.638142   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:46:50.638211   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:46:50.668081   51251 cri.go:89] found id: ""
	I1018 17:46:50.668158   51251 logs.go:282] 0 containers: []
	W1018 17:46:50.668178   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:46:50.668199   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:46:50.668286   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:46:50.695569   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:50.695633   51251 cri.go:89] found id: ""
	I1018 17:46:50.695655   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:46:50.695739   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:50.699470   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:46:50.699542   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:46:50.727412   51251 cri.go:89] found id: ""
	I1018 17:46:50.727436   51251 logs.go:282] 0 containers: []
	W1018 17:46:50.727445   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:46:50.727454   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:46:50.727467   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:50.753408   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:46:50.753435   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:50.827768   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:46:50.827848   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:50.859978   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:46:50.860003   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:46:50.939527   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:46:50.939561   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:46:50.980682   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:46:50.980711   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:46:51.076628   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:46:51.076663   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:46:51.090191   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:46:51.090220   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:46:51.182260   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:46:51.173917   12158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:51.174843   12158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:51.176369   12158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:51.176776   12158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:51.178414   12158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:46:51.173917   12158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:51.174843   12158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:51.176369   12158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:51.176776   12158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:51.178414   12158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:46:51.182283   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:46:51.182295   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:51.232720   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:46:51.232749   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:51.308144   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:46:51.308178   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:53.837977   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:46:53.848545   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:46:53.848614   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:46:53.876495   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:53.876519   51251 cri.go:89] found id: ""
	I1018 17:46:53.876528   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:46:53.876595   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:53.880322   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:46:53.880394   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:46:53.907168   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:53.907231   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:53.907249   51251 cri.go:89] found id: ""
	I1018 17:46:53.907272   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:46:53.907357   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:53.911597   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:53.914987   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:46:53.915059   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:46:53.940518   51251 cri.go:89] found id: ""
	I1018 17:46:53.940542   51251 logs.go:282] 0 containers: []
	W1018 17:46:53.940551   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:46:53.940557   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:46:53.940616   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:46:53.978433   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:53.978457   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:53.978462   51251 cri.go:89] found id: ""
	I1018 17:46:53.978469   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:46:53.978524   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:53.982381   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:53.985948   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:46:53.986022   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:46:54.015365   51251 cri.go:89] found id: ""
	I1018 17:46:54.015389   51251 logs.go:282] 0 containers: []
	W1018 17:46:54.015403   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:46:54.015410   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:46:54.015469   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:46:54.043566   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:54.043585   51251 cri.go:89] found id: ""
	I1018 17:46:54.043594   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:46:54.043652   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:54.047469   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:46:54.047537   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:46:54.074756   51251 cri.go:89] found id: ""
	I1018 17:46:54.074779   51251 logs.go:282] 0 containers: []
	W1018 17:46:54.074788   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:46:54.074797   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:46:54.074836   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:54.105299   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:46:54.105329   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:54.181466   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:46:54.181501   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:46:54.274419   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:46:54.274455   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:46:54.312879   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:46:54.312907   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:46:54.417669   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:46:54.417744   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:46:54.429755   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:46:54.429780   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:46:54.498834   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:46:54.489425   12285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:54.491045   12285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:54.492004   12285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:54.493115   12285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:54.494863   12285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:46:54.489425   12285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:54.491045   12285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:54.492004   12285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:54.493115   12285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:54.494863   12285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:46:54.498906   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:46:54.498927   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:54.527210   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:46:54.527238   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:54.569700   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:46:54.569732   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:54.644529   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:46:54.644561   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:57.172362   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:46:57.183486   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:46:57.183556   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:46:57.221818   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:57.221836   51251 cri.go:89] found id: ""
	I1018 17:46:57.221844   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:46:57.221899   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:57.225454   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:46:57.225520   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:46:57.252169   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:57.252192   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:57.252197   51251 cri.go:89] found id: ""
	I1018 17:46:57.252206   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:46:57.252263   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:57.256351   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:57.259722   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:46:57.259804   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:46:57.286504   51251 cri.go:89] found id: ""
	I1018 17:46:57.286527   51251 logs.go:282] 0 containers: []
	W1018 17:46:57.286536   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:46:57.286542   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:46:57.286603   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:46:57.314232   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:57.314254   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:57.314259   51251 cri.go:89] found id: ""
	I1018 17:46:57.314267   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:46:57.314322   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:57.317847   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:57.320999   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:46:57.321074   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:46:57.346974   51251 cri.go:89] found id: ""
	I1018 17:46:57.346999   51251 logs.go:282] 0 containers: []
	W1018 17:46:57.347008   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:46:57.347014   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:46:57.347069   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:46:57.373499   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:57.373567   51251 cri.go:89] found id: ""
	I1018 17:46:57.373587   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:46:57.373664   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:46:57.377584   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:46:57.377703   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:46:57.407749   51251 cri.go:89] found id: ""
	I1018 17:46:57.407773   51251 logs.go:282] 0 containers: []
	W1018 17:46:57.407782   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:46:57.407790   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:46:57.407801   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:46:57.420407   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:46:57.420432   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:46:57.450356   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:46:57.450384   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:46:57.487363   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:46:57.487394   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:46:57.580373   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:46:57.580410   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:46:57.617494   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:46:57.617524   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:46:57.719190   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:46:57.719227   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:46:57.790068   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:46:57.780054   12431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:57.780444   12431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:57.782856   12431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:57.783240   12431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:57.785433   12431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:46:57.780054   12431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:57.780444   12431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:57.782856   12431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:57.783240   12431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:46:57.785433   12431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:46:57.790090   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:46:57.790104   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:46:57.849803   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:46:57.849835   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:46:57.881569   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:46:57.881600   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:46:57.911940   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:46:57.911966   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:47:00.495334   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:47:00.507616   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:47:00.507694   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:47:00.539238   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:00.539258   51251 cri.go:89] found id: ""
	I1018 17:47:00.539266   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:47:00.539323   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:00.543503   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:47:00.543571   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:47:00.574079   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:00.574112   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:00.574118   51251 cri.go:89] found id: ""
	I1018 17:47:00.574126   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:47:00.574199   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:00.578461   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:00.582394   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:47:00.582473   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:47:00.609898   51251 cri.go:89] found id: ""
	I1018 17:47:00.609973   51251 logs.go:282] 0 containers: []
	W1018 17:47:00.610004   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:47:00.610017   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:47:00.610086   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:47:00.637367   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:00.637388   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:00.637393   51251 cri.go:89] found id: ""
	I1018 17:47:00.637400   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:47:00.637464   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:00.641319   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:00.644789   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:47:00.644895   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:47:00.672435   51251 cri.go:89] found id: ""
	I1018 17:47:00.672467   51251 logs.go:282] 0 containers: []
	W1018 17:47:00.672476   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:47:00.672498   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:47:00.672580   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:47:00.699455   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:00.699483   51251 cri.go:89] found id: ""
	I1018 17:47:00.699492   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:47:00.699583   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:00.703264   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:47:00.703360   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:47:00.728880   51251 cri.go:89] found id: ""
	I1018 17:47:00.728902   51251 logs.go:282] 0 containers: []
	W1018 17:47:00.728909   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:47:00.728919   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:47:00.728930   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:47:00.823491   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:47:00.823527   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:47:00.902015   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:47:00.902048   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:47:00.934461   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:47:00.934491   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:47:00.946667   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:47:00.946693   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:47:01.028399   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:47:01.020279   12546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:01.020921   12546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:01.022494   12546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:01.023037   12546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:01.024610   12546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:47:01.020279   12546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:01.020921   12546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:01.022494   12546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:01.023037   12546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:01.024610   12546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:47:01.028462   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:47:01.028491   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:01.054806   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:47:01.054833   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:01.113787   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:47:01.113863   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:01.158354   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:47:01.158386   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:01.240342   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:47:01.240377   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:01.271277   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:47:01.271308   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:03.801529   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:47:03.812492   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:47:03.812565   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:47:03.840023   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:03.840046   51251 cri.go:89] found id: ""
	I1018 17:47:03.840054   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:47:03.840107   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:03.844123   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:47:03.844199   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:47:03.871286   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:03.871312   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:03.871317   51251 cri.go:89] found id: ""
	I1018 17:47:03.871325   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:47:03.871393   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:03.875415   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:03.879340   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:47:03.879454   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:47:03.907561   51251 cri.go:89] found id: ""
	I1018 17:47:03.907586   51251 logs.go:282] 0 containers: []
	W1018 17:47:03.907595   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:47:03.907602   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:47:03.907685   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:47:03.933344   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:03.933418   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:03.933445   51251 cri.go:89] found id: ""
	I1018 17:47:03.933467   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:47:03.933532   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:03.937202   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:03.940624   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:47:03.940692   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:47:03.976333   51251 cri.go:89] found id: ""
	I1018 17:47:03.976360   51251 logs.go:282] 0 containers: []
	W1018 17:47:03.976369   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:47:03.976375   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:47:03.976431   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:47:04.003969   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:04.003993   51251 cri.go:89] found id: ""
	I1018 17:47:04.004002   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:47:04.004073   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:04.008851   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:47:04.008931   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:47:04.043815   51251 cri.go:89] found id: ""
	I1018 17:47:04.043837   51251 logs.go:282] 0 containers: []
	W1018 17:47:04.043845   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:47:04.043854   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:47:04.043866   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:04.103935   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:47:04.103972   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:04.197102   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:47:04.197140   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:04.232873   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:47:04.232903   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:47:04.308823   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:47:04.308859   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:47:04.340563   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:47:04.340591   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:47:04.411725   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:47:04.402979   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:04.403733   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:04.405382   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:04.405957   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:04.407619   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:47:04.402979   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:04.403733   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:04.405382   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:04.405957   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:04.407619   12699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:47:04.411746   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:47:04.411758   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:04.436986   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:47:04.437017   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:04.474563   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:47:04.474599   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:04.508182   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:47:04.508207   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:47:04.612203   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:47:04.612245   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:47:07.124391   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:47:07.136931   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:47:07.137030   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:47:07.162931   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:07.162951   51251 cri.go:89] found id: ""
	I1018 17:47:07.162960   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:47:07.163014   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:07.166802   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:47:07.166873   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:47:07.194647   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:07.194666   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:07.194671   51251 cri.go:89] found id: ""
	I1018 17:47:07.194679   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:47:07.194732   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:07.198306   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:07.202321   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:47:07.202393   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:47:07.236779   51251 cri.go:89] found id: ""
	I1018 17:47:07.236804   51251 logs.go:282] 0 containers: []
	W1018 17:47:07.236813   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:47:07.236819   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:47:07.236876   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:47:07.266781   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:07.266801   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:07.266806   51251 cri.go:89] found id: ""
	I1018 17:47:07.266813   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:47:07.266867   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:07.270559   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:07.275186   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:47:07.275286   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:47:07.304386   51251 cri.go:89] found id: ""
	I1018 17:47:07.304423   51251 logs.go:282] 0 containers: []
	W1018 17:47:07.304454   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:47:07.304462   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:47:07.304540   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:47:07.333196   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:07.333220   51251 cri.go:89] found id: ""
	I1018 17:47:07.333228   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:47:07.333322   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:07.338348   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:47:07.338462   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:47:07.366271   51251 cri.go:89] found id: ""
	I1018 17:47:07.366343   51251 logs.go:282] 0 containers: []
	W1018 17:47:07.366364   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:47:07.366379   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:47:07.366391   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:47:07.468507   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:47:07.468585   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:07.529687   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:47:07.529725   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:07.565649   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:47:07.565779   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:07.596211   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:47:07.596237   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:47:07.615230   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:47:07.615299   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:47:07.692829   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:47:07.685395   12829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:07.685775   12829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:07.687235   12829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:07.687549   12829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:07.689030   12829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:47:07.685395   12829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:07.685775   12829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:07.687235   12829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:07.687549   12829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:07.689030   12829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:47:07.692899   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:47:07.692930   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:07.718952   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:47:07.719025   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:07.795561   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:47:07.795598   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:07.824250   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:47:07.824280   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:47:07.906836   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:47:07.906868   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
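Each polling pass in the log above follows the same two-step pattern: discover container IDs for a component with "crictl ps -a --quiet --name=<component>", then collect the last 400 lines of each discovered container's logs with "crictl logs --tail 400 <id>". The Go sketch below mirrors those exact commands for illustration only; the helper names are made up for this sketch, it is not minikube's own code (minikube runs the commands over SSH on the node), and it assumes crictl and sudo are available locally.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs mirrors "sudo crictl ps -a --quiet --name=<component>" from the
	// log: it returns the IDs of all containers (any state) matching the component.
	func containerIDs(component string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	// tailLogs mirrors "sudo crictl logs --tail 400 <id>" from the log.
	func tailLogs(id string) string {
		out, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
		return string(out)
	}

	func main() {
		for _, component := range []string{"kube-apiserver", "etcd", "kube-scheduler"} {
			ids, err := containerIDs(component)
			if err != nil || len(ids) == 0 {
				fmt.Printf("no container found matching %q\n", component)
				continue
			}
			for _, id := range ids {
				fmt.Printf("=== %s [%s] ===\n%s\n", component, id, tailLogs(id))
			}
		}
	}
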
	I1018 17:47:10.439981   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:47:10.451479   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:47:10.451545   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:47:10.480101   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:10.480123   51251 cri.go:89] found id: ""
	I1018 17:47:10.480132   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:47:10.480190   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:10.483904   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:47:10.484019   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:47:10.514873   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:10.514897   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:10.514902   51251 cri.go:89] found id: ""
	I1018 17:47:10.514910   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:47:10.514966   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:10.518574   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:10.522267   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:47:10.522379   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:47:10.550236   51251 cri.go:89] found id: ""
	I1018 17:47:10.550300   51251 logs.go:282] 0 containers: []
	W1018 17:47:10.550324   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:47:10.550343   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:47:10.550419   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:47:10.576542   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:10.576564   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:10.576569   51251 cri.go:89] found id: ""
	I1018 17:47:10.576576   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:47:10.576631   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:10.580343   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:10.583810   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:47:10.583876   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:47:10.608923   51251 cri.go:89] found id: ""
	I1018 17:47:10.608997   51251 logs.go:282] 0 containers: []
	W1018 17:47:10.609009   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:47:10.609016   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:47:10.609083   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:47:10.640901   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:10.640997   51251 cri.go:89] found id: ""
	I1018 17:47:10.641019   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:47:10.641104   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:10.644777   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:47:10.644898   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:47:10.686801   51251 cri.go:89] found id: ""
	I1018 17:47:10.686867   51251 logs.go:282] 0 containers: []
	W1018 17:47:10.686888   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:47:10.686902   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:47:10.686913   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:47:10.790476   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:47:10.790513   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:10.866774   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:47:10.866808   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:10.896066   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:47:10.896092   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:47:10.977137   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:47:10.977170   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:47:11.028633   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:47:11.028664   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:47:11.040841   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:47:11.040870   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:47:11.108732   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:47:11.100472   12971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:11.101171   12971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:11.102909   12971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:11.103502   12971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:11.105204   12971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:47:11.100472   12971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:11.101171   12971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:11.102909   12971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:11.103502   12971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:11.105204   12971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:47:11.108754   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:47:11.108767   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:11.142956   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:47:11.142982   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:11.203085   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:47:11.203120   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:11.245548   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:47:11.245582   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:13.780727   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:47:13.792098   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:47:13.792166   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:47:13.819543   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:13.819564   51251 cri.go:89] found id: ""
	I1018 17:47:13.819571   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:47:13.819627   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:13.823882   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:47:13.823951   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:47:13.849465   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:13.849495   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:13.849501   51251 cri.go:89] found id: ""
	I1018 17:47:13.849508   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:47:13.849563   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:13.853400   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:13.856833   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:47:13.856907   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:47:13.886459   51251 cri.go:89] found id: ""
	I1018 17:47:13.886482   51251 logs.go:282] 0 containers: []
	W1018 17:47:13.886502   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:47:13.886509   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:47:13.886576   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:47:13.914771   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:13.914840   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:13.914859   51251 cri.go:89] found id: ""
	I1018 17:47:13.914884   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:47:13.914961   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:13.919618   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:13.923284   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:47:13.923358   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:47:13.970811   51251 cri.go:89] found id: ""
	I1018 17:47:13.970833   51251 logs.go:282] 0 containers: []
	W1018 17:47:13.970841   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:47:13.970848   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:47:13.970905   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:47:13.997307   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:13.997333   51251 cri.go:89] found id: ""
	I1018 17:47:13.997341   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:47:13.997406   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:14.001258   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:47:14.001421   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:47:14.031834   51251 cri.go:89] found id: ""
	I1018 17:47:14.031908   51251 logs.go:282] 0 containers: []
	W1018 17:47:14.031930   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:47:14.031952   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:47:14.031991   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:14.115427   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:47:14.115472   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:14.155640   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:47:14.155675   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:47:14.260678   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:47:14.260712   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:14.299224   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:47:14.299256   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:14.328160   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:47:14.328189   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:47:14.402362   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:47:14.402396   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:47:14.436253   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:47:14.436279   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:47:14.448030   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:47:14.448054   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:47:14.523971   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:47:14.516092   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:14.516475   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:14.517978   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:14.518298   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:14.519757   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:47:14.516092   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:14.516475   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:14.517978   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:14.518298   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:14.519757   13126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:47:14.523992   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:47:14.524003   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:14.553496   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:47:14.553520   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:17.135556   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:47:17.147008   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:47:17.147074   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:47:17.173389   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:17.173409   51251 cri.go:89] found id: ""
	I1018 17:47:17.173417   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:47:17.173471   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:17.177579   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:47:17.177651   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:47:17.203627   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:17.203645   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:17.203650   51251 cri.go:89] found id: ""
	I1018 17:47:17.203657   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:47:17.203710   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:17.207344   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:17.217855   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:47:17.217930   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:47:17.249063   51251 cri.go:89] found id: ""
	I1018 17:47:17.249089   51251 logs.go:282] 0 containers: []
	W1018 17:47:17.249098   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:47:17.249105   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:47:17.249168   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:47:17.277163   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:17.277181   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:17.277186   51251 cri.go:89] found id: ""
	I1018 17:47:17.277193   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:47:17.277248   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:17.282612   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:17.286495   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:47:17.286569   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:47:17.319307   51251 cri.go:89] found id: ""
	I1018 17:47:17.319375   51251 logs.go:282] 0 containers: []
	W1018 17:47:17.319398   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:47:17.319410   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:47:17.319486   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:47:17.346484   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:17.346554   51251 cri.go:89] found id: ""
	I1018 17:47:17.346580   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:47:17.346657   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:17.350475   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:47:17.350550   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:47:17.377839   51251 cri.go:89] found id: ""
	I1018 17:47:17.377902   51251 logs.go:282] 0 containers: []
	W1018 17:47:17.377922   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:47:17.377931   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:47:17.377943   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:17.404392   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:47:17.404417   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:17.465336   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:47:17.465374   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:17.544540   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:47:17.544575   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:17.578410   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:47:17.578440   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:17.622849   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:47:17.622874   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:17.651286   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:47:17.651315   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:47:17.729896   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:47:17.729933   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:47:17.762097   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:47:17.762131   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:47:17.860291   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:47:17.860324   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:47:17.873306   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:47:17.873333   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:47:17.956831   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:47:17.948399   13279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:17.948817   13279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:17.950652   13279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:17.951205   13279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:17.953012   13279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:47:17.948399   13279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:17.948817   13279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:17.950652   13279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:17.951205   13279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:17.953012   13279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
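Every "describe nodes" attempt in this log fails the same way: kubectl's API discovery request to https://localhost:8443 is refused because nothing is listening on that port yet. The snippet below is an illustrative Go check of that one condition, a bare TCP dial to the address and port taken from the error messages above; it is not how minikube or kubectl actually probe the apiserver.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// The failures above all reduce to "dial tcp [::1]:8443: connect: connection refused",
		// so a plain TCP dial to localhost:8443 reproduces the condition being retried.
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			fmt.Println("kube-apiserver not reachable yet:", err)
			return
		}
		conn.Close()
		fmt.Println("kube-apiserver is accepting connections on localhost:8443")
	}
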
	I1018 17:47:20.457766   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:47:20.468306   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:47:20.468375   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:47:20.502498   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:20.502519   51251 cri.go:89] found id: ""
	I1018 17:47:20.502527   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:47:20.502581   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:20.506455   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:47:20.506526   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:47:20.533813   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:20.533831   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:20.533836   51251 cri.go:89] found id: ""
	I1018 17:47:20.533844   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:47:20.533897   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:20.537754   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:20.541481   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:47:20.541549   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:47:20.567040   51251 cri.go:89] found id: ""
	I1018 17:47:20.567063   51251 logs.go:282] 0 containers: []
	W1018 17:47:20.567071   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:47:20.567078   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:47:20.567139   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:47:20.596640   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:20.596661   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:20.596666   51251 cri.go:89] found id: ""
	I1018 17:47:20.596674   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:47:20.596729   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:20.600667   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:20.604504   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:47:20.604571   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:47:20.636801   51251 cri.go:89] found id: ""
	I1018 17:47:20.636826   51251 logs.go:282] 0 containers: []
	W1018 17:47:20.636835   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:47:20.636841   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:47:20.636919   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:47:20.663088   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:20.663107   51251 cri.go:89] found id: ""
	I1018 17:47:20.663120   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:47:20.663175   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:20.666758   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:47:20.666830   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:47:20.693183   51251 cri.go:89] found id: ""
	I1018 17:47:20.693205   51251 logs.go:282] 0 containers: []
	W1018 17:47:20.693214   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:47:20.693223   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:47:20.693233   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:47:20.759707   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:47:20.751450   13345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:20.752024   13345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:20.753590   13345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:20.754259   13345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:20.755733   13345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:47:20.751450   13345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:20.752024   13345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:20.753590   13345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:20.754259   13345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:20.755733   13345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:47:20.759728   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:47:20.759743   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:20.820356   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:47:20.820393   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:20.855109   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:47:20.855142   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:20.933430   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:47:20.933470   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:20.961931   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:47:20.961959   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:47:21.002517   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:47:21.002558   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:47:21.019433   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:47:21.019511   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:21.047420   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:47:21.047495   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:21.079819   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:47:21.079893   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:47:21.155722   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:47:21.155759   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
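Between gathering passes the log shows minikube re-running "sudo pgrep -xnf kube-apiserver.*minikube.*" roughly every three seconds to see whether an apiserver process has appeared. The loop below is only a sketch of that wait-and-retry behaviour; the function name, the retry interval, and the overall deadline are assumptions for illustration, not values taken from minikube.

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// apiserverRunning mirrors the "sudo pgrep -xnf kube-apiserver.*minikube.*" probe in
	// the log: pgrep exits 0 only when at least one matching process exists.
	func apiserverRunning() bool {
		return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
	}

	func main() {
		deadline := time.Now().Add(4 * time.Minute) // assumed overall timeout for the sketch
		for time.Now().Before(deadline) {
			if apiserverRunning() {
				fmt.Println("kube-apiserver process found")
				return
			}
			fmt.Println("kube-apiserver not running yet; retrying")
			time.Sleep(3 * time.Second) // roughly the spacing between passes in the log
		}
		fmt.Println("gave up waiting for kube-apiserver")
	}
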
	I1018 17:47:23.766139   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:47:23.777085   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:47:23.777151   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:47:23.811684   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:23.811707   51251 cri.go:89] found id: ""
	I1018 17:47:23.811715   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:47:23.811770   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:23.817453   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:47:23.817525   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:47:23.844121   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:23.844141   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:23.844146   51251 cri.go:89] found id: ""
	I1018 17:47:23.844153   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:47:23.844213   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:23.847866   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:23.851438   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:47:23.851510   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:47:23.879002   51251 cri.go:89] found id: ""
	I1018 17:47:23.879067   51251 logs.go:282] 0 containers: []
	W1018 17:47:23.879082   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:47:23.879089   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:47:23.879148   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:47:23.905700   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:23.905722   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:23.905727   51251 cri.go:89] found id: ""
	I1018 17:47:23.905735   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:47:23.905838   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:23.909628   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:23.913950   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:47:23.914019   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:47:23.946272   51251 cri.go:89] found id: ""
	I1018 17:47:23.946347   51251 logs.go:282] 0 containers: []
	W1018 17:47:23.946362   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:47:23.946370   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:47:23.946428   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:47:23.982078   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:23.982100   51251 cri.go:89] found id: ""
	I1018 17:47:23.982109   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:47:23.982162   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:23.985823   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:47:23.985895   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:47:24.020838   51251 cri.go:89] found id: ""
	I1018 17:47:24.020863   51251 logs.go:282] 0 containers: []
	W1018 17:47:24.020872   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:47:24.020881   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:47:24.020895   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:24.049680   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:47:24.049704   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:24.114947   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:47:24.114984   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:24.157780   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:47:24.157811   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:24.187365   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:47:24.187391   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:47:24.272125   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:47:24.264460   13521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:24.265126   13521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:24.266121   13521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:24.266734   13521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:24.268444   13521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:47:24.264460   13521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:24.265126   13521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:24.266121   13521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:24.266734   13521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:24.268444   13521 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:47:24.272150   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:47:24.272162   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:24.351210   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:47:24.351246   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:24.379627   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:47:24.379654   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:47:24.459957   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:47:24.459991   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:47:24.490809   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:47:24.490834   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:47:24.594421   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:47:24.594457   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:47:27.106652   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:47:27.118797   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:47:27.118867   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:47:27.156694   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:27.156714   51251 cri.go:89] found id: ""
	I1018 17:47:27.156723   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:47:27.156776   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:27.160480   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:47:27.160550   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:47:27.187759   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:27.187780   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:27.187785   51251 cri.go:89] found id: ""
	I1018 17:47:27.187793   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:47:27.187855   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:27.191713   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:27.195093   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:47:27.195159   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:47:27.231641   51251 cri.go:89] found id: ""
	I1018 17:47:27.231663   51251 logs.go:282] 0 containers: []
	W1018 17:47:27.231671   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:47:27.231681   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:47:27.231737   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:47:27.259596   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:27.259614   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:27.259619   51251 cri.go:89] found id: ""
	I1018 17:47:27.259626   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:47:27.259678   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:27.263281   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:27.266728   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:47:27.266826   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:47:27.294104   51251 cri.go:89] found id: ""
	I1018 17:47:27.294127   51251 logs.go:282] 0 containers: []
	W1018 17:47:27.294139   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:47:27.294145   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:47:27.294205   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:47:27.321776   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:27.321798   51251 cri.go:89] found id: ""
	I1018 17:47:27.321806   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:47:27.321868   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:27.325558   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:47:27.325631   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:47:27.356639   51251 cri.go:89] found id: ""
	I1018 17:47:27.356666   51251 logs.go:282] 0 containers: []
	W1018 17:47:27.356674   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:47:27.356683   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:47:27.356694   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:47:27.462575   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:47:27.462610   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:47:27.529536   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:47:27.520733   13624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:27.521424   13624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:27.523093   13624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:27.523552   13624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:27.525157   13624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:47:27.520733   13624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:27.521424   13624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:27.523093   13624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:27.523552   13624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:27.525157   13624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:47:27.529559   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:47:27.529573   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:27.555154   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:47:27.555180   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:27.632084   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:47:27.632117   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:27.662590   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:47:27.662614   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:27.691692   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:47:27.691718   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:47:27.774358   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:47:27.774393   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:47:27.825515   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:47:27.825545   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:47:27.838343   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:47:27.838369   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:27.902992   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:47:27.903025   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:30.448737   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:47:30.460318   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:47:30.460398   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:47:30.488282   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:30.488306   51251 cri.go:89] found id: ""
	I1018 17:47:30.488314   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:47:30.488367   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:30.491908   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:47:30.491974   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:47:30.521041   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:30.521066   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:30.521071   51251 cri.go:89] found id: ""
	I1018 17:47:30.521079   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:47:30.521136   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:30.525103   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:30.528840   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:47:30.528916   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:47:30.562515   51251 cri.go:89] found id: ""
	I1018 17:47:30.562537   51251 logs.go:282] 0 containers: []
	W1018 17:47:30.562545   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:47:30.562551   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:47:30.562627   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:47:30.592562   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:30.592584   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:30.592589   51251 cri.go:89] found id: ""
	I1018 17:47:30.592596   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:47:30.592653   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:30.596706   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:30.600570   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:47:30.600692   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:47:30.627771   51251 cri.go:89] found id: ""
	I1018 17:47:30.627793   51251 logs.go:282] 0 containers: []
	W1018 17:47:30.627802   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:47:30.627808   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:47:30.627867   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:47:30.654477   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:30.654497   51251 cri.go:89] found id: ""
	I1018 17:47:30.654510   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:47:30.654565   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:30.658617   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:47:30.658686   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:47:30.689627   51251 cri.go:89] found id: ""
	I1018 17:47:30.689650   51251 logs.go:282] 0 containers: []
	W1018 17:47:30.689658   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:47:30.689667   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:47:30.689684   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:47:30.721050   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:47:30.721077   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:47:30.732370   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:47:30.732446   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:47:30.805446   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:47:30.796158   13776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:30.796640   13776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:30.798623   13776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:30.799026   13776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:30.800608   13776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:47:30.796158   13776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:30.796640   13776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:30.798623   13776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:30.799026   13776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:30.800608   13776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:47:30.805466   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:47:30.805478   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:30.830998   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:47:30.831024   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:30.906775   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:47:30.906811   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:30.940644   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:47:30.940671   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:47:31.026053   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:47:31.026089   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:47:31.137923   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:47:31.137966   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:31.233631   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:47:31.233668   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:31.264350   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:47:31.264374   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:33.793612   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:47:33.805648   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 17:47:33.805780   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 17:47:33.839954   51251 cri.go:89] found id: "707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:33.840025   51251 cri.go:89] found id: ""
	I1018 17:47:33.840058   51251 logs.go:282] 1 containers: [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4]
	I1018 17:47:33.840138   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:33.844129   51251 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 17:47:33.844243   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 17:47:33.871384   51251 cri.go:89] found id: "02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:33.871408   51251 cri.go:89] found id: "d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:33.871413   51251 cri.go:89] found id: ""
	I1018 17:47:33.871421   51251 logs.go:282] 2 containers: [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354]
	I1018 17:47:33.871476   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:33.875651   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:33.879420   51251 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 17:47:33.879516   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 17:47:33.905649   51251 cri.go:89] found id: ""
	I1018 17:47:33.905676   51251 logs.go:282] 0 containers: []
	W1018 17:47:33.905684   51251 logs.go:284] No container was found matching "coredns"
	I1018 17:47:33.905691   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 17:47:33.905749   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 17:47:33.934660   51251 cri.go:89] found id: "59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:33.934683   51251 cri.go:89] found id: "5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:33.934688   51251 cri.go:89] found id: ""
	I1018 17:47:33.934696   51251 logs.go:282] 2 containers: [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157]
	I1018 17:47:33.934780   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:33.938842   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:33.942670   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 17:47:33.942738   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 17:47:33.978544   51251 cri.go:89] found id: ""
	I1018 17:47:33.978568   51251 logs.go:282] 0 containers: []
	W1018 17:47:33.978576   51251 logs.go:284] No container was found matching "kube-proxy"
	I1018 17:47:33.978582   51251 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 17:47:33.978643   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 17:47:34.012312   51251 cri.go:89] found id: "9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:34.012389   51251 cri.go:89] found id: ""
	I1018 17:47:34.012468   51251 logs.go:282] 1 containers: [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9]
	I1018 17:47:34.012564   51251 ssh_runner.go:195] Run: which crictl
	I1018 17:47:34.016868   51251 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 17:47:34.017048   51251 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 17:47:34.044577   51251 cri.go:89] found id: ""
	I1018 17:47:34.044648   51251 logs.go:282] 0 containers: []
	W1018 17:47:34.044668   51251 logs.go:284] No container was found matching "kindnet"
	I1018 17:47:34.044692   51251 logs.go:123] Gathering logs for kube-apiserver [707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4] ...
	I1018 17:47:34.044729   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 707afb8644f81a3c4b5a60caccf73b6fa806c2f2bc6eadcf95feead762b240f4"
	I1018 17:47:34.072731   51251 logs.go:123] Gathering logs for container status ...
	I1018 17:47:34.072799   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 17:47:34.103949   51251 logs.go:123] Gathering logs for dmesg ...
	I1018 17:47:34.103978   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 17:47:34.117148   51251 logs.go:123] Gathering logs for describe nodes ...
	I1018 17:47:34.117176   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 17:47:34.197560   51251 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1018 17:47:34.184268   13921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:34.184883   13921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:34.186363   13921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:34.186832   13921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:34.188578   13921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1018 17:47:34.184268   13921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:34.184883   13921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:34.186363   13921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:34.186832   13921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1018 17:47:34.188578   13921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 17:47:34.197584   51251 logs.go:123] Gathering logs for etcd [02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768] ...
	I1018 17:47:34.197598   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 02dcbfbdea8d0a71f7bdf717e4e6a4f3dc4d44dde69ac88cd9c203f3c141c768"
	I1018 17:47:34.271679   51251 logs.go:123] Gathering logs for etcd [d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354] ...
	I1018 17:47:34.271712   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1a90b30ced34f5157a7ad6506df7332bc799e4b23f184cf286f6ac652aed354"
	I1018 17:47:34.306656   51251 logs.go:123] Gathering logs for kube-scheduler [59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c] ...
	I1018 17:47:34.306683   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 59eaf8ee40495e0cf0d24af62f5db9ac04b4618be6d8bef9eff2036b6958307c"
	I1018 17:47:34.386272   51251 logs.go:123] Gathering logs for kube-scheduler [5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157] ...
	I1018 17:47:34.386308   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5d43fc3f51e4db87da0d50f3bce770d2feda80a192df04a9c0d62099a55a1157"
	I1018 17:47:34.414077   51251 logs.go:123] Gathering logs for kube-controller-manager [9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9] ...
	I1018 17:47:34.414108   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 9fbf215d6eccd49efad7b0bd5d2284bcf3838b034174258899d32f40c97217c9"
	I1018 17:47:34.443807   51251 logs.go:123] Gathering logs for CRI-O ...
	I1018 17:47:34.443833   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 17:47:34.522683   51251 logs.go:123] Gathering logs for kubelet ...
	I1018 17:47:34.522719   51251 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 17:47:37.133400   51251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:47:37.147181   51251 out.go:203] 
	W1018 17:47:37.150020   51251 out.go:285] X Exiting due to K8S_APISERVER_MISSING: adding node: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1018 17:47:37.150063   51251 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1018 17:47:37.150073   51251 out.go:285] * Related issues:
	W1018 17:47:37.150088   51251 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1018 17:47:37.150102   51251 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1018 17:47:37.152991   51251 out.go:203] 
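
	For reference, the wait that failed above (K8S_APISERVER_MISSING) is driven by the same probes recorded throughout this log: a pgrep for the apiserver process plus crictl listings and logs. A minimal sketch of re-running those checks by hand, assuming the ha-181800 profile from this run is still up (the container ID below is a placeholder to be taken from the crictl listing):

	# Open a shell on the primary control-plane node of this profile
	minikube ssh -p ha-181800

	# Inside the node: the process probe minikube's wait loop uses
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'

	# List apiserver containers (running or exited) and inspect their logs
	sudo crictl ps -a --name=kube-apiserver
	sudo crictl logs --tail 400 <container-id>   # substitute an ID from the listing above

	# Recent kubelet and CRI-O journal entries, as gathered above
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400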
	
	
	==> CRI-O <==
	Oct 18 17:42:09 ha-181800 crio[664]: time="2025-10-18T17:42:09.20257717Z" level=info msg="Started container" PID=1382 containerID=20677c7e60d1996e5ef30701c2fa483c048319a013425dfed6187c287c0356bf description=kube-system/kindnet-72mvm/kindnet-cni id=83e6058c-c5b8-448d-b3d7-5186691986a4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9a75bfa4304b7995fa070b07859898cd617fcbbbf769fcdbda120cb3da5f1690
	Oct 18 17:42:09 ha-181800 crio[664]: time="2025-10-18T17:42:09.208099281Z" level=info msg="Started container" PID=1383 containerID=53b6059c5f00ad29bd734722047caa1917ada2ed5ac7284628e49ffa30dab92f description=kube-system/coredns-66bc5c9577-p7nbg/coredns id=943df95e-dbb8-484a-8f2a-243495bd2d36 name=/runtime.v1.RuntimeService/StartContainer sandboxID=399a3f557e994a4d64c7e77bfa57fcb97dec3f4f1b2ef3d5dcc06e92031fff33
	Oct 18 17:42:10 ha-181800 crio[664]: time="2025-10-18T17:42:10.111678023Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=1ee0e455-5885-424a-be70-f38c74ac9b88 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 17:42:10 ha-181800 crio[664]: time="2025-10-18T17:42:10.113151329Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=7332cd08-d810-418f-9239-f994866438d4 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 17:42:10 ha-181800 crio[664]: time="2025-10-18T17:42:10.115024796Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=d765eb2e-c860-4fae-a3f2-643ee4144808 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 17:42:10 ha-181800 crio[664]: time="2025-10-18T17:42:10.11532002Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 17:42:10 ha-181800 crio[664]: time="2025-10-18T17:42:10.119986301Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 17:42:10 ha-181800 crio[664]: time="2025-10-18T17:42:10.120167292Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/794fca1f203edd67ad13c746b10dd2dcd8837f7ca0cf411e1437cb8975c5cb1d/merged/etc/passwd: no such file or directory"
	Oct 18 17:42:10 ha-181800 crio[664]: time="2025-10-18T17:42:10.120189134Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/794fca1f203edd67ad13c746b10dd2dcd8837f7ca0cf411e1437cb8975c5cb1d/merged/etc/group: no such file or directory"
	Oct 18 17:42:10 ha-181800 crio[664]: time="2025-10-18T17:42:10.120431935Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 17:42:10 ha-181800 crio[664]: time="2025-10-18T17:42:10.145840056Z" level=info msg="Created container a443aed43e21dadb519c5e91013a1d8eb554ae8abd04f5107863e313e372bdc7: kube-system/storage-provisioner/storage-provisioner" id=d765eb2e-c860-4fae-a3f2-643ee4144808 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 17:42:10 ha-181800 crio[664]: time="2025-10-18T17:42:10.146767329Z" level=info msg="Starting container: a443aed43e21dadb519c5e91013a1d8eb554ae8abd04f5107863e313e372bdc7" id=7f29d364-0d5e-4652-9da1-74e15b27ef77 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 17:42:10 ha-181800 crio[664]: time="2025-10-18T17:42:10.148484142Z" level=info msg="Started container" PID=1447 containerID=a443aed43e21dadb519c5e91013a1d8eb554ae8abd04f5107863e313e372bdc7 description=kube-system/storage-provisioner/storage-provisioner id=7f29d364-0d5e-4652-9da1-74e15b27ef77 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c018680cc61b2fa252ffde6cc7588c2be7ef28b3a444122d3feed4e3f9e480f5
	Oct 18 17:42:19 ha-181800 crio[664]: time="2025-10-18T17:42:19.512333091Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 17:42:19 ha-181800 crio[664]: time="2025-10-18T17:42:19.516220368Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 17:42:19 ha-181800 crio[664]: time="2025-10-18T17:42:19.516254731Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 17:42:19 ha-181800 crio[664]: time="2025-10-18T17:42:19.516276286Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 17:42:19 ha-181800 crio[664]: time="2025-10-18T17:42:19.51949706Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 17:42:19 ha-181800 crio[664]: time="2025-10-18T17:42:19.51953286Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 17:42:19 ha-181800 crio[664]: time="2025-10-18T17:42:19.519558739Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 17:42:19 ha-181800 crio[664]: time="2025-10-18T17:42:19.523529282Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 17:42:19 ha-181800 crio[664]: time="2025-10-18T17:42:19.52356175Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 17:42:19 ha-181800 crio[664]: time="2025-10-18T17:42:19.523584117Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 17:42:19 ha-181800 crio[664]: time="2025-10-18T17:42:19.526772128Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 17:42:19 ha-181800 crio[664]: time="2025-10-18T17:42:19.526803677Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	a443aed43e21d       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   5 minutes ago       Running             storage-provisioner       1                   c018680cc61b2       storage-provisioner                 kube-system
	53b6059c5f00a       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   5 minutes ago       Running             coredns                   1                   399a3f557e994       coredns-66bc5c9577-p7nbg            kube-system
	20677c7e60d19       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   5 minutes ago       Running             kindnet-cni               1                   9a75bfa4304b7       kindnet-72mvm                       kube-system
	f24a57e28db5a       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   5 minutes ago       Running             busybox                   1                   5e71cad12b779       busybox-7b57f96db7-fbwpv            default
	2e4a1f13e1162       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   5 minutes ago       Running             kube-proxy                1                   7fecbfb4c17d9       kube-proxy-stgvm                    kube-system
	2c69476db7a72       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   5 minutes ago       Running             coredns                   1                   578310fdfac47       coredns-66bc5c9577-f6v2w            kube-system
	96f0fa2b71bea       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   6 minutes ago       Running             kube-controller-manager   4                   6555f89f5d7b8       kube-controller-manager-ha-181800   kube-system
	3c32a11f94c33       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   6 minutes ago       Running             kube-apiserver            4                   e20726c2a8ebb       kube-apiserver-ha-181800            kube-system
	1ffdfbb5e9622       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   6 minutes ago       Exited              kube-controller-manager   3                   6555f89f5d7b8       kube-controller-manager-ha-181800   kube-system
	933870b5e9434       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   6 minutes ago       Exited              kube-apiserver            3                   e20726c2a8ebb       kube-apiserver-ha-181800            kube-system
	dda012a63c45a       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   7 minutes ago       Running             etcd                      1                   41b759ba439df       etcd-ha-181800                      kube-system
	ac8ef32697a35       2a8917f902489be5a8dd414209c32b77bd644d187ea646d86dbdc31e85efb551   7 minutes ago       Running             kube-vip                  0                   a52c5b125e763       kube-vip-ha-181800                  kube-system
	6e9b6c2f0e69c       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   7 minutes ago       Running             kube-scheduler            1                   44df15c75598f       kube-scheduler-ha-181800            kube-system
	
	
	==> coredns [2c69476db7a72cef87d583347c986806259d1f8ec4d34537de08f030eed150f5] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54621 - 11724 "HINFO IN 6166212655013536567.4042456242834438062. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.026635361s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [53b6059c5f00ad29bd734722047caa1917ada2ed5ac7284628e49ffa30dab92f] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36574 - 3492 "HINFO IN 4503061436688671475.4348845373689282768. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.02623671s
	
	
	==> describe nodes <==
	Name:               ha-181800
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-181800
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=ha-181800
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T17_33_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 17:33:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-181800
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 17:47:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 17:46:57 +0000   Sat, 18 Oct 2025 17:33:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 17:46:57 +0000   Sat, 18 Oct 2025 17:33:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 17:46:57 +0000   Sat, 18 Oct 2025 17:33:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 17:46:57 +0000   Sat, 18 Oct 2025 17:34:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-181800
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                7dc9b150-98ed-4d4d-b680-5759a1e067a9
	  Boot ID:                    8a1d6305-2994-4aa4-a4cc-6b62966b9918
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-fbwpv             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-66bc5c9577-f6v2w             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     14m
	  kube-system                 coredns-66bc5c9577-p7nbg             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     14m
	  kube-system                 etcd-ha-181800                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         14m
	  kube-system                 kindnet-72mvm                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      14m
	  kube-system                 kube-apiserver-ha-181800             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-181800    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-stgvm                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-181800             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-181800                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m45s                  kube-proxy       
	  Normal   Starting                 14m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node ha-181800 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     14m (x8 over 14m)      kubelet          Node ha-181800 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node ha-181800 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  14m                    kubelet          Node ha-181800 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     14m                    kubelet          Node ha-181800 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    14m                    kubelet          Node ha-181800 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 14m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 14m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   RegisteredNode           14m                    node-controller  Node ha-181800 event: Registered Node ha-181800 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-181800 event: Registered Node ha-181800 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-181800 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-181800 event: Registered Node ha-181800 in Controller
	  Normal   RegisteredNode           8m33s                  node-controller  Node ha-181800 event: Registered Node ha-181800 in Controller
	  Normal   Starting                 7m59s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 7m59s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  7m59s (x8 over 7m59s)  kubelet          Node ha-181800 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m59s (x8 over 7m59s)  kubelet          Node ha-181800 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m59s (x8 over 7m59s)  kubelet          Node ha-181800 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m5s                   node-controller  Node ha-181800 event: Registered Node ha-181800 in Controller
	
	
	Name:               ha-181800-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-181800-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=ha-181800
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_18T17_34_02_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 17:34:02 +0000
	Taints:             node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-181800-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 17:39:26 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sat, 18 Oct 2025 17:39:16 +0000   Sat, 18 Oct 2025 17:42:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sat, 18 Oct 2025 17:39:16 +0000   Sat, 18 Oct 2025 17:42:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sat, 18 Oct 2025 17:39:16 +0000   Sat, 18 Oct 2025 17:42:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sat, 18 Oct 2025 17:39:16 +0000   Sat, 18 Oct 2025 17:42:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-181800-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                b2dd8f24-78e0-4eba-8b0c-b12412f7af7d
	  Boot ID:                    8a1d6305-2994-4aa4-a4cc-6b62966b9918
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-cp9q6                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-ha-181800-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         13m
	  kube-system                 kindnet-86s8z                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-181800-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-181800-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-dpwpn                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-181800-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-181800-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 13m                kube-proxy       
	  Normal   RegisteredNode           13m                node-controller  Node ha-181800-m02 event: Registered Node ha-181800-m02 in Controller
	  Normal   RegisteredNode           13m                node-controller  Node ha-181800-m02 event: Registered Node ha-181800-m02 in Controller
	  Normal   RegisteredNode           12m                node-controller  Node ha-181800-m02 event: Registered Node ha-181800-m02 in Controller
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node ha-181800-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node ha-181800-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  10m (x9 over 10m)  kubelet          Node ha-181800-m02 status is now: NodeHasSufficientMemory
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Normal   NodeNotReady             9m56s              node-controller  Node ha-181800-m02 status is now: NodeNotReady
	  Warning  ContainerGCFailed        9m22s              kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           8m33s              node-controller  Node ha-181800-m02 event: Registered Node ha-181800-m02 in Controller
	  Normal   RegisteredNode           6m5s               node-controller  Node ha-181800-m02 event: Registered Node ha-181800-m02 in Controller
	  Normal   NodeNotReady             5m15s              node-controller  Node ha-181800-m02 status is now: NodeNotReady
	
	
	Name:               ha-181800-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-181800-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=ha-181800
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_18T17_35_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 17:35:18 +0000
	Taints:             node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-181800-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 17:39:12 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sat, 18 Oct 2025 17:38:02 +0000   Sat, 18 Oct 2025 17:42:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sat, 18 Oct 2025 17:38:02 +0000   Sat, 18 Oct 2025 17:42:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sat, 18 Oct 2025 17:38:02 +0000   Sat, 18 Oct 2025 17:42:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sat, 18 Oct 2025 17:38:02 +0000   Sat, 18 Oct 2025 17:42:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-181800-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                4a1abf8a-63a3-4737-81ec-1878616c489b
	  Boot ID:                    8a1d6305-2994-4aa4-a4cc-6b62966b9918
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-lzcbm                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-ha-181800-m03                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-9qbbw                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-ha-181800-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-181800-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-qsqmb                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-181800-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-181800-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        12m    kube-proxy       
	  Normal  RegisteredNode  12m    node-controller  Node ha-181800-m03 event: Registered Node ha-181800-m03 in Controller
	  Normal  RegisteredNode  12m    node-controller  Node ha-181800-m03 event: Registered Node ha-181800-m03 in Controller
	  Normal  RegisteredNode  12m    node-controller  Node ha-181800-m03 event: Registered Node ha-181800-m03 in Controller
	  Normal  RegisteredNode  8m33s  node-controller  Node ha-181800-m03 event: Registered Node ha-181800-m03 in Controller
	  Normal  RegisteredNode  6m5s   node-controller  Node ha-181800-m03 event: Registered Node ha-181800-m03 in Controller
	  Normal  NodeNotReady    5m15s  node-controller  Node ha-181800-m03 status is now: NodeNotReady
	
	
	Name:               ha-181800-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-181800-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=ha-181800
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_18T17_36_11_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 17:36:10 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-181800-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 17:39:13 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sat, 18 Oct 2025 17:38:23 +0000   Sat, 18 Oct 2025 17:42:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sat, 18 Oct 2025 17:38:23 +0000   Sat, 18 Oct 2025 17:42:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sat, 18 Oct 2025 17:38:23 +0000   Sat, 18 Oct 2025 17:42:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sat, 18 Oct 2025 17:38:23 +0000   Sat, 18 Oct 2025 17:42:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-181800-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                afc79373-b3a1-4495-8f28-5c3685ad131e
	  Boot ID:                    8a1d6305-2994-4aa4-a4cc-6b62966b9918
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-88bv7       100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-proxy-fj4ww    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   NodeHasNoDiskPressure    11m (x3 over 11m)  kubelet          Node ha-181800-m04 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 11m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 11m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  11m (x3 over 11m)  kubelet          Node ha-181800-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     11m (x3 over 11m)  kubelet          Node ha-181800-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m                node-controller  Node ha-181800-m04 event: Registered Node ha-181800-m04 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-181800-m04 event: Registered Node ha-181800-m04 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-181800-m04 event: Registered Node ha-181800-m04 in Controller
	  Normal   NodeReady                11m                kubelet          Node ha-181800-m04 status is now: NodeReady
	  Normal   RegisteredNode           8m33s              node-controller  Node ha-181800-m04 event: Registered Node ha-181800-m04 in Controller
	  Normal   RegisteredNode           6m5s               node-controller  Node ha-181800-m04 event: Registered Node ha-181800-m04 in Controller
	  Normal   NodeNotReady             5m15s              node-controller  Node ha-181800-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[Oct18 16:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014995] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.499206] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.035776] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.808632] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.418900] kauditd_printk_skb: 36 callbacks suppressed
	[Oct18 17:12] overlayfs: idmapped layers are currently not supported
	[  +0.082393] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Oct18 17:18] overlayfs: idmapped layers are currently not supported
	[Oct18 17:19] overlayfs: idmapped layers are currently not supported
	[Oct18 17:33] overlayfs: idmapped layers are currently not supported
	[ +35.716082] overlayfs: idmapped layers are currently not supported
	[Oct18 17:35] overlayfs: idmapped layers are currently not supported
	[Oct18 17:36] overlayfs: idmapped layers are currently not supported
	[Oct18 17:37] overlayfs: idmapped layers are currently not supported
	[Oct18 17:39] overlayfs: idmapped layers are currently not supported
	[  +3.088699] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [dda012a63c45a5c37a124da696c59f0ac82f51c6728ee30f5a6b3a9df6f28b54] <==
	{"level":"warn","ts":"2025-10-18T17:47:51.988010Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-18T17:47:51.988920Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-18T17:47:51.994846Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-18T17:47:51.999781Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-18T17:47:52.013882Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-18T17:47:52.015487Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-18T17:47:52.024302Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-18T17:47:52.029483Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-18T17:47:52.033690Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-18T17:47:52.039007Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-18T17:47:52.048734Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-18T17:47:52.058344Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-18T17:47:52.065214Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-18T17:47:52.068783Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-18T17:47:52.073832Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-18T17:47:52.085731Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-18T17:47:52.097000Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-18T17:47:52.104870Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-18T17:47:52.108964Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-18T17:47:52.109170Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-18T17:47:52.113384Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-18T17:47:52.123773Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-18T17:47:52.153050Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-18T17:47:52.183970Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-10-18T17:47:52.209790Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 17:47:52 up  1:30,  0 user,  load average: 0.93, 0.99, 0.98
	Linux ha-181800 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [20677c7e60d1996e5ef30701c2fa483c048319a013425dfed6187c287c0356bf] <==
	I1018 17:47:19.513585       1 main.go:324] Node ha-181800-m04 has CIDR [10.244.3.0/24] 
	I1018 17:47:29.513013       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1018 17:47:29.513108       1 main.go:324] Node ha-181800-m03 has CIDR [10.244.2.0/24] 
	I1018 17:47:29.513281       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1018 17:47:29.513322       1 main.go:324] Node ha-181800-m04 has CIDR [10.244.3.0/24] 
	I1018 17:47:29.513420       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 17:47:29.513455       1 main.go:301] handling current node
	I1018 17:47:29.513491       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1018 17:47:29.513519       1 main.go:324] Node ha-181800-m02 has CIDR [10.244.1.0/24] 
	I1018 17:47:39.513042       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1018 17:47:39.513152       1 main.go:324] Node ha-181800-m03 has CIDR [10.244.2.0/24] 
	I1018 17:47:39.513341       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1018 17:47:39.513397       1 main.go:324] Node ha-181800-m04 has CIDR [10.244.3.0/24] 
	I1018 17:47:39.513497       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 17:47:39.513543       1 main.go:301] handling current node
	I1018 17:47:39.513578       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1018 17:47:39.513607       1 main.go:324] Node ha-181800-m02 has CIDR [10.244.1.0/24] 
	I1018 17:47:49.513073       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 17:47:49.513106       1 main.go:301] handling current node
	I1018 17:47:49.513121       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1018 17:47:49.513126       1 main.go:324] Node ha-181800-m02 has CIDR [10.244.1.0/24] 
	I1018 17:47:49.513289       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1018 17:47:49.513297       1 main.go:324] Node ha-181800-m03 has CIDR [10.244.2.0/24] 
	I1018 17:47:49.513351       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1018 17:47:49.513357       1 main.go:324] Node ha-181800-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [3c32a11f94c333ae590b8745e77ffbb92367453ca4e6aee44e0e906b14390ca9] <==
	I1018 17:41:42.012115       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1018 17:41:42.012379       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1018 17:41:42.012425       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1018 17:41:42.013814       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1018 17:41:42.013944       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 17:41:42.025145       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1018 17:41:42.025992       1 cache.go:39] Caches are synced for autoregister controller
	I1018 17:41:42.026156       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1018 17:41:42.026261       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1018 17:41:42.026295       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1018 17:41:42.026308       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1018 17:41:42.026410       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1018 17:41:42.027548       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 17:41:42.033558       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	W1018 17:41:42.048863       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.3]
	I1018 17:41:42.050261       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 17:41:42.067717       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E1018 17:41:42.072232       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I1018 17:41:42.729546       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1018 17:41:43.284542       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I1018 17:41:45.808842       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 17:41:54.269828       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1018 17:41:54.405180       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 17:41:54.473862       1 controller.go:667] quota admission added evaluator for: deployments.apps
	W1018 17:42:03.284458       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	
	
	==> kube-apiserver [933870b5e943415b7ecac6fd98f8537b5e0e42b86569b4b7d319eff44a3da010] <==
	I1018 17:40:52.195862       1 server.go:150] Version: v1.34.1
	I1018 17:40:52.195974       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W1018 17:40:52.812771       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storagemigration.k8s.io/v1alpha1
	W1018 17:40:52.812808       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storage.k8s.io/v1alpha1
	W1018 17:40:52.812818       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=internal.apiserver.k8s.io/v1alpha1
	W1018 17:40:52.812823       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=imagepolicy.k8s.io/v1alpha1
	W1018 17:40:52.812828       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=resource.k8s.io/v1alpha3
	W1018 17:40:52.812832       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=coordination.k8s.io/v1alpha2
	W1018 17:40:52.812840       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=certificates.k8s.io/v1alpha1
	W1018 17:40:52.812844       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=admissionregistration.k8s.io/v1alpha1
	W1018 17:40:52.812850       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=authentication.k8s.io/v1alpha1
	W1018 17:40:52.812854       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=rbac.authorization.k8s.io/v1alpha1
	W1018 17:40:52.812858       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=scheduling.k8s.io/v1alpha1
	W1018 17:40:52.812862       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=node.k8s.io/v1alpha1
	W1018 17:40:52.829696       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1018 17:40:52.831179       1 logging.go:55] [core] [Channel #4 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1018 17:40:52.831774       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	I1018 17:40:52.838589       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1018 17:40:52.845223       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I1018 17:40:52.845250       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1018 17:40:52.845852       1 instance.go:239] Using reconciler: lease
	W1018 17:40:52.848887       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1018 17:41:12.829067       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1018 17:41:12.831182       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F1018 17:41:12.846964       1 instance.go:232] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [1ffdfbb5e9622e4192714fed8bfa4ea7a73dcc053f130642d8e29a5c565ebea9] <==
	I1018 17:41:07.403597       1 serving.go:386] Generated self-signed cert in-memory
	I1018 17:41:08.625550       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1018 17:41:08.625581       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 17:41:08.627414       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1018 17:41:08.627750       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 17:41:08.627867       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1018 17:41:08.628008       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E1018 17:41:23.855834       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8443/healthz\": dial tcp 192.168.49.2:8443: connect: connection refused"
	
	
	==> kube-controller-manager [96f0fa2b71beaec136d643f232999f193a1e3a16d1ca723cfb31748694731abe] <==
	I1018 17:41:47.143192       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1018 17:41:47.146859       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1018 17:41:47.162191       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1018 17:41:47.167924       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 17:41:47.177964       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1018 17:41:47.178029       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1018 17:41:47.178094       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1018 17:41:47.178140       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1018 17:41:47.186626       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1018 17:41:47.187226       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1018 17:41:47.187330       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 17:41:47.187422       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-181800"
	I1018 17:41:47.187477       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-181800-m02"
	I1018 17:41:47.187509       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-181800-m03"
	I1018 17:41:47.187545       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-181800-m04"
	I1018 17:41:47.187570       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1018 17:41:47.188233       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1018 17:41:47.188405       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1018 17:41:47.187047       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1018 17:41:47.188792       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 17:41:47.189599       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 17:41:47.189657       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-181800-m04"
	I1018 17:41:47.193090       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 17:41:47.204060       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 17:42:37.382673       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="PartialDisruption"
	
	
	==> kube-proxy [2e4a1f13e11624e5f4250e6082edc23d03fdf1fc7644e45614e6cdfc5dd39e76] <==
	I1018 17:42:06.262094       1 server_linux.go:53] "Using iptables proxy"
	I1018 17:42:06.334558       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 17:42:06.434813       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 17:42:06.434860       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1018 17:42:06.434950       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 17:42:06.451883       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 17:42:06.451931       1 server_linux.go:132] "Using iptables Proxier"
	I1018 17:42:06.455099       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 17:42:06.455439       1 server.go:527] "Version info" version="v1.34.1"
	I1018 17:42:06.455461       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 17:42:06.457621       1 config.go:200] "Starting service config controller"
	I1018 17:42:06.457642       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 17:42:06.457661       1 config.go:106] "Starting endpoint slice config controller"
	I1018 17:42:06.457665       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 17:42:06.457677       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 17:42:06.457681       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 17:42:06.458386       1 config.go:309] "Starting node config controller"
	I1018 17:42:06.458405       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 17:42:06.458412       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 17:42:06.558355       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 17:42:06.558395       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 17:42:06.558458       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [6e9b6c2f0e69c56776af6be092e8313aef540b7319fd0664f3eb3f947353a66b] <==
	E1018 17:41:07.266841       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8443/api/v1/namespaces?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 17:41:07.311343       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 17:41:07.533447       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 17:41:07.651007       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 17:41:08.355495       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 17:41:16.769551       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 17:41:17.489724       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 17:41:17.665056       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 17:41:18.205960       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 17:41:18.570146       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 17:41:18.949283       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 17:41:21.873636       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 17:41:21.969747       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 17:41:22.140090       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 17:41:23.503240       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 17:41:24.328010       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 17:41:25.411284       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 17:41:25.991046       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 17:41:26.048796       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 17:41:27.484563       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1018 17:41:28.014616       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 17:41:28.168052       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 17:41:29.601662       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 17:41:31.989429       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	I1018 17:42:01.134075       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 17:41:54 ha-181800 kubelet[798]: E1018 17:41:54.537384     798 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/coredns-66bc5c9577-p7nbg" podUID="9d361193-5b45-400e-8161-804fc30e7515"
	Oct 18 17:41:54 ha-181800 kubelet[798]: E1018 17:41:54.541593     798 kuberuntime_manager.go:1449] "Unhandled Error" err="container kindnet-cni start failed in pod kindnet-72mvm_kube-system(5edfc356-9d49-4895-b36a-06c2bd39155a): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 18 17:41:54 ha-181800 kubelet[798]: E1018 17:41:54.541650     798 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/kindnet-72mvm" podUID="5edfc356-9d49-4895-b36a-06c2bd39155a"
	Oct 18 17:41:54 ha-181800 kubelet[798]: E1018 17:41:54.543446     798 kuberuntime_manager.go:1449] "Unhandled Error" err="container busybox start failed in pod busybox-7b57f96db7-fbwpv_default(58e37574-901f-46d4-bb33-2d0f7ae9c08c): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 18 17:41:54 ha-181800 kubelet[798]: E1018 17:41:54.543484     798 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="default/busybox-7b57f96db7-fbwpv" podUID="58e37574-901f-46d4-bb33-2d0f7ae9c08c"
	Oct 18 17:41:54 ha-181800 kubelet[798]: E1018 17:41:54.556129     798 kuberuntime_manager.go:1449] "Unhandled Error" err="container storage-provisioner start failed in pod storage-provisioner_kube-system(3c6521cd-8e1b-46aa-96a3-39e475e1426c): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 18 17:41:54 ha-181800 kubelet[798]: E1018 17:41:54.556318     798 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/storage-provisioner" podUID="3c6521cd-8e1b-46aa-96a3-39e475e1426c"
	Oct 18 17:41:54 ha-181800 kubelet[798]: W1018 17:41:54.573814     798 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/5743bf3218eb5aef405eed98d37c004771c8345cbd69290b8943dc0c7cbde7c2/crio-578310fdfac473102b8772a2897f522e4e15e81fc4a884380a337b9e6d1aa5b2 WatchSource:0}: Error finding container 578310fdfac473102b8772a2897f522e4e15e81fc4a884380a337b9e6d1aa5b2: Status 404 returned error can't find the container with id 578310fdfac473102b8772a2897f522e4e15e81fc4a884380a337b9e6d1aa5b2
	Oct 18 17:41:54 ha-181800 kubelet[798]: E1018 17:41:54.578568     798 kuberuntime_manager.go:1449] "Unhandled Error" err="container coredns start failed in pod coredns-66bc5c9577-f6v2w_kube-system(a1fbdf00-9636-43a5-b1ed-a98bcacb5537): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 18 17:41:54 ha-181800 kubelet[798]: E1018 17:41:54.578616     798 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/coredns-66bc5c9577-f6v2w" podUID="a1fbdf00-9636-43a5-b1ed-a98bcacb5537"
	Oct 18 17:41:55 ha-181800 kubelet[798]: I1018 17:41:55.114096     798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1a1eda2cde092be2eda0d8bef8f7ec3" path="/var/lib/kubelet/pods/a1a1eda2cde092be2eda0d8bef8f7ec3/volumes"
	Oct 18 17:41:55 ha-181800 kubelet[798]: E1018 17:41:55.433187     798 kuberuntime_manager.go:1449] "Unhandled Error" err="container coredns start failed in pod coredns-66bc5c9577-f6v2w_kube-system(a1fbdf00-9636-43a5-b1ed-a98bcacb5537): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 18 17:41:55 ha-181800 kubelet[798]: E1018 17:41:55.433245     798 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/coredns-66bc5c9577-f6v2w" podUID="a1fbdf00-9636-43a5-b1ed-a98bcacb5537"
	Oct 18 17:41:55 ha-181800 kubelet[798]: E1018 17:41:55.435023     798 kuberuntime_manager.go:1449] "Unhandled Error" err="container coredns start failed in pod coredns-66bc5c9577-p7nbg_kube-system(9d361193-5b45-400e-8161-804fc30e7515): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 18 17:41:55 ha-181800 kubelet[798]: E1018 17:41:55.435148     798 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/coredns-66bc5c9577-p7nbg" podUID="9d361193-5b45-400e-8161-804fc30e7515"
	Oct 18 17:41:55 ha-181800 kubelet[798]: E1018 17:41:55.441863     798 kuberuntime_manager.go:1449] "Unhandled Error" err="container busybox start failed in pod busybox-7b57f96db7-fbwpv_default(58e37574-901f-46d4-bb33-2d0f7ae9c08c): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 18 17:41:55 ha-181800 kubelet[798]: E1018 17:41:55.441915     798 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="default/busybox-7b57f96db7-fbwpv" podUID="58e37574-901f-46d4-bb33-2d0f7ae9c08c"
	Oct 18 17:41:55 ha-181800 kubelet[798]: E1018 17:41:55.445392     798 kuberuntime_manager.go:1449] "Unhandled Error" err="container kube-proxy start failed in pod kube-proxy-stgvm_kube-system(15b89226-91ae-478f-acfe-7841776b1377): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 18 17:41:55 ha-181800 kubelet[798]: E1018 17:41:55.445443     798 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/kube-proxy-stgvm" podUID="15b89226-91ae-478f-acfe-7841776b1377"
	Oct 18 17:41:55 ha-181800 kubelet[798]: E1018 17:41:55.450521     798 kuberuntime_manager.go:1449] "Unhandled Error" err="container kindnet-cni start failed in pod kindnet-72mvm_kube-system(5edfc356-9d49-4895-b36a-06c2bd39155a): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 18 17:41:55 ha-181800 kubelet[798]: E1018 17:41:55.450564     798 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/kindnet-72mvm" podUID="5edfc356-9d49-4895-b36a-06c2bd39155a"
	Oct 18 17:41:55 ha-181800 kubelet[798]: E1018 17:41:55.458132     798 kuberuntime_manager.go:1449] "Unhandled Error" err="container storage-provisioner start failed in pod storage-provisioner_kube-system(3c6521cd-8e1b-46aa-96a3-39e475e1426c): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Oct 18 17:41:55 ha-181800 kubelet[798]: E1018 17:41:55.458255     798 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/storage-provisioner" podUID="3c6521cd-8e1b-46aa-96a3-39e475e1426c"
	Oct 18 17:42:53 ha-181800 kubelet[798]: E1018 17:42:53.045182     798 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f59bab0e4fe86f69836eb694e5d31105cca80fd917445482f23b6d46da571384\": container with ID starting with f59bab0e4fe86f69836eb694e5d31105cca80fd917445482f23b6d46da571384 not found: ID does not exist" containerID="f59bab0e4fe86f69836eb694e5d31105cca80fd917445482f23b6d46da571384"
	Oct 18 17:42:53 ha-181800 kubelet[798]: I1018 17:42:53.045240     798 kuberuntime_gc.go:364] "Error getting ContainerStatus for containerID" containerID="f59bab0e4fe86f69836eb694e5d31105cca80fd917445482f23b6d46da571384" err="rpc error: code = NotFound desc = could not find container \"f59bab0e4fe86f69836eb694e5d31105cca80fd917445482f23b6d46da571384\": container with ID starting with f59bab0e4fe86f69836eb694e5d31105cca80fd917445482f23b6d46da571384 not found: ID does not exist"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-181800 -n ha-181800
helpers_test.go:269: (dbg) Run:  kubectl --context ha-181800 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (6.43s)
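The kubelet entries quoted above show busybox, kube-proxy, kindnet-cni and storage-provisioner on ha-181800 all failing with CreateContainerConfigError ("services have not yet been read at least once, cannot construct envvars"). A minimal way to pull just those entries from a node, sketched with the same ssh subcommand this run uses elsewhere (same profile and node names; assumes kubelet runs as a systemd unit inside the node, as it does in the kicbase image):

	out/minikube-linux-arm64 -p ha-181800 ssh -n ha-181800 "sudo journalctl -u kubelet --no-pager | grep CreateContainerConfigError"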

                                                
                                    
TestMultiControlPlane/serial/StopCluster (13.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-181800 stop --alsologtostderr -v 5: (13.488679117s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-181800 status --alsologtostderr -v 5: exit status 7 (129.45219ms)

                                                
                                                
-- stdout --
	ha-181800
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-181800-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-181800-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-181800-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 17:48:09.199157   69432 out.go:360] Setting OutFile to fd 1 ...
	I1018 17:48:09.199382   69432 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 17:48:09.199409   69432 out.go:374] Setting ErrFile to fd 2...
	I1018 17:48:09.199427   69432 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 17:48:09.199729   69432 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-2509/.minikube/bin
	I1018 17:48:09.199953   69432 out.go:368] Setting JSON to false
	I1018 17:48:09.200013   69432 mustload.go:65] Loading cluster: ha-181800
	I1018 17:48:09.200055   69432 notify.go:220] Checking for updates...
	I1018 17:48:09.200472   69432 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:48:09.200505   69432 status.go:174] checking status of ha-181800 ...
	I1018 17:48:09.201086   69432 cli_runner.go:164] Run: docker container inspect ha-181800 --format={{.State.Status}}
	I1018 17:48:09.218914   69432 status.go:371] ha-181800 host status = "Stopped" (err=<nil>)
	I1018 17:48:09.218934   69432 status.go:384] host is not running, skipping remaining checks
	I1018 17:48:09.218946   69432 status.go:176] ha-181800 status: &{Name:ha-181800 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 17:48:09.218969   69432 status.go:174] checking status of ha-181800-m02 ...
	I1018 17:48:09.219266   69432 cli_runner.go:164] Run: docker container inspect ha-181800-m02 --format={{.State.Status}}
	I1018 17:48:09.238628   69432 status.go:371] ha-181800-m02 host status = "Stopped" (err=<nil>)
	I1018 17:48:09.238650   69432 status.go:384] host is not running, skipping remaining checks
	I1018 17:48:09.238656   69432 status.go:176] ha-181800-m02 status: &{Name:ha-181800-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 17:48:09.238674   69432 status.go:174] checking status of ha-181800-m03 ...
	I1018 17:48:09.238958   69432 cli_runner.go:164] Run: docker container inspect ha-181800-m03 --format={{.State.Status}}
	I1018 17:48:09.261197   69432 status.go:371] ha-181800-m03 host status = "Stopped" (err=<nil>)
	I1018 17:48:09.261219   69432 status.go:384] host is not running, skipping remaining checks
	I1018 17:48:09.261226   69432 status.go:176] ha-181800-m03 status: &{Name:ha-181800-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 17:48:09.261248   69432 status.go:174] checking status of ha-181800-m04 ...
	I1018 17:48:09.261565   69432 cli_runner.go:164] Run: docker container inspect ha-181800-m04 --format={{.State.Status}}
	I1018 17:48:09.278374   69432 status.go:371] ha-181800-m04 host status = "Stopped" (err=<nil>)
	I1018 17:48:09.278397   69432 status.go:384] host is not running, skipping remaining checks
	I1018 17:48:09.278405   69432 status.go:176] ha-181800-m04 status: &{Name:ha-181800-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
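The trace above shows how status resolves each node's host state: one docker container inspect per node with a {{.State.Status}} Go template. A quick spot-check of all four node containers directly, as a sketch assuming the default container names from this profile:

	for n in ha-181800 ha-181800-m02 ha-181800-m03 ha-181800-m04; do
	  docker container inspect "$n" --format '{{.Name}}: {{.State.Status}}'
	done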
ha_test.go:545: status says not two control-plane nodes are present: args "out/minikube-linux-arm64 -p ha-181800 status --alsologtostderr -v 5": ha-181800
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-181800-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-181800-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-181800-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
ha_test.go:551: status says not three kubelets are stopped: args "out/minikube-linux-arm64 -p ha-181800 status --alsologtostderr -v 5": ha-181800
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-181800-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-181800-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-181800-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
ha_test.go:554: status says not two apiservers are stopped: args "out/minikube-linux-arm64 -p ha-181800 status --alsologtostderr -v 5": ha-181800
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-181800-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-181800-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-181800-m04
type: Worker
host: Stopped
kubelet: Stopped
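The three assertions above (ha_test.go:545, :551, :554) appear to count matching lines in the status text: the test expects two control-plane nodes, three stopped kubelets and two stopped apiservers, since m03 should already have been deleted, but the output still lists four nodes. A shell sketch of the same counts, using the status command from this run (stdout carries the status; --alsologtostderr goes to stderr):

	out/minikube-linux-arm64 -p ha-181800 status --alsologtostderr -v 5 > /tmp/ha-status.txt || true
	grep -c 'type: Control Plane' /tmp/ha-status.txt   # test expects 2, output above has 3
	grep -c 'kubelet: Stopped'    /tmp/ha-status.txt   # test expects 3, output above has 4
	grep -c 'apiserver: Stopped'  /tmp/ha-status.txt   # test expects 2, output above has 3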

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-181800
helpers_test.go:243: (dbg) docker inspect ha-181800:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5743bf3218eb5aef405eed98d37c004771c8345cbd69290b8943dc0c7cbde7c2",
	        "Created": "2025-10-18T17:32:56.632116312Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "exited",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 137,
	            "Error": "",
	            "StartedAt": "2025-10-18T17:39:46.245999615Z",
	            "FinishedAt": "2025-10-18T17:48:08.862033359Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/5743bf3218eb5aef405eed98d37c004771c8345cbd69290b8943dc0c7cbde7c2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5743bf3218eb5aef405eed98d37c004771c8345cbd69290b8943dc0c7cbde7c2/hostname",
	        "HostsPath": "/var/lib/docker/containers/5743bf3218eb5aef405eed98d37c004771c8345cbd69290b8943dc0c7cbde7c2/hosts",
	        "LogPath": "/var/lib/docker/containers/5743bf3218eb5aef405eed98d37c004771c8345cbd69290b8943dc0c7cbde7c2/5743bf3218eb5aef405eed98d37c004771c8345cbd69290b8943dc0c7cbde7c2-json.log",
	        "Name": "/ha-181800",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-181800:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-181800",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5743bf3218eb5aef405eed98d37c004771c8345cbd69290b8943dc0c7cbde7c2",
	                "LowerDir": "/var/lib/docker/overlay2/3ee9cac217f1df57c366a054bdb2c2082365ef42fdd9cc6be8e55acf85cb35b8-init/diff:/var/lib/docker/overlay2/584ab177b02ad2db5330471b7171ad39934c457d8615b9ee4939a04b59f78474/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3ee9cac217f1df57c366a054bdb2c2082365ef42fdd9cc6be8e55acf85cb35b8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3ee9cac217f1df57c366a054bdb2c2082365ef42fdd9cc6be8e55acf85cb35b8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3ee9cac217f1df57c366a054bdb2c2082365ef42fdd9cc6be8e55acf85cb35b8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-181800",
	                "Source": "/var/lib/docker/volumes/ha-181800/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-181800",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-181800",
	                "name.minikube.sigs.k8s.io": "ha-181800",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "",
	            "SandboxKey": "",
	            "Ports": {},
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-181800": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "903568cdf824d38f52cb9a58c116a852c83eb599cf8cc87e25ba21b593e45142",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-181800",
	                        "5743bf3218eb"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-181800 -n ha-181800
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p ha-181800 -n ha-181800: exit status 7 (70.43276ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 7 (may be ok)
helpers_test.go:249: "ha-181800" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (13.71s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (177.01s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1018 17:50:00.530065    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-181800 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (2m52.672725428s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 status --alsologtostderr -v 5
ha_test.go:568: (dbg) Done: out/minikube-linux-arm64 -p ha-181800 status --alsologtostderr -v 5: (1.085191715s)
ha_test.go:573: status says not two control-plane nodes are present: args "out/minikube-linux-arm64 -p ha-181800 status --alsologtostderr -v 5": ha-181800
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-181800-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-181800-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-181800-m04
type: Worker
host: Running
kubelet: Running

                                                
                                                
ha_test.go:576: status says not three hosts are running: args "out/minikube-linux-arm64 -p ha-181800 status --alsologtostderr -v 5": ha-181800
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-181800-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-181800-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-181800-m04
type: Worker
host: Running
kubelet: Running

                                                
                                                
ha_test.go:579: status says not three kubelets are running: args "out/minikube-linux-arm64 -p ha-181800 status --alsologtostderr -v 5": ha-181800
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-181800-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-181800-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-181800-m04
type: Worker
host: Running
kubelet: Running

                                                
                                                
ha_test.go:582: status says not two apiservers are running: args "out/minikube-linux-arm64 -p ha-181800 status --alsologtostderr -v 5": ha-181800
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-181800-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-181800-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-181800-m04
type: Worker
host: Running
kubelet: Running

                                                
                                                
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
ha_test.go:599: expected 3 nodes Ready status to be True, got 
-- stdout --
	' True
	 True
	 True
	 True
	'

                                                
                                                
-- /stdout --
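ha_test.go:594 renders each node's Ready condition with the go-template shown above and expects three entries; the restarted cluster still reports four Ready nodes, apparently because the earlier m03 delete never completed (the node delete m03 entry in the Audit table below has no end time). An equivalent check with jsonpath, printing each node name next to its Ready status (a sketch; the context name matches the profile used throughout this run):

	kubectl --context ha-181800 get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'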
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-181800
helpers_test.go:243: (dbg) docker inspect ha-181800:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5743bf3218eb5aef405eed98d37c004771c8345cbd69290b8943dc0c7cbde7c2",
	        "Created": "2025-10-18T17:32:56.632116312Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 69617,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T17:48:09.683613005Z",
	            "FinishedAt": "2025-10-18T17:48:08.862033359Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/5743bf3218eb5aef405eed98d37c004771c8345cbd69290b8943dc0c7cbde7c2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5743bf3218eb5aef405eed98d37c004771c8345cbd69290b8943dc0c7cbde7c2/hostname",
	        "HostsPath": "/var/lib/docker/containers/5743bf3218eb5aef405eed98d37c004771c8345cbd69290b8943dc0c7cbde7c2/hosts",
	        "LogPath": "/var/lib/docker/containers/5743bf3218eb5aef405eed98d37c004771c8345cbd69290b8943dc0c7cbde7c2/5743bf3218eb5aef405eed98d37c004771c8345cbd69290b8943dc0c7cbde7c2-json.log",
	        "Name": "/ha-181800",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-181800:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-181800",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5743bf3218eb5aef405eed98d37c004771c8345cbd69290b8943dc0c7cbde7c2",
	                "LowerDir": "/var/lib/docker/overlay2/3ee9cac217f1df57c366a054bdb2c2082365ef42fdd9cc6be8e55acf85cb35b8-init/diff:/var/lib/docker/overlay2/584ab177b02ad2db5330471b7171ad39934c457d8615b9ee4939a04b59f78474/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3ee9cac217f1df57c366a054bdb2c2082365ef42fdd9cc6be8e55acf85cb35b8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3ee9cac217f1df57c366a054bdb2c2082365ef42fdd9cc6be8e55acf85cb35b8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3ee9cac217f1df57c366a054bdb2c2082365ef42fdd9cc6be8e55acf85cb35b8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-181800",
	                "Source": "/var/lib/docker/volumes/ha-181800/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-181800",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-181800",
	                "name.minikube.sigs.k8s.io": "ha-181800",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4110ab73f7f9137e0eb013438b540b426c3fa9fedc1bed76ec7ffcc4fc35499f",
	            "SandboxKey": "/var/run/docker/netns/4110ab73f7f9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32818"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32819"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32822"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32820"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32821"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-181800": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d2:81:2f:47:7d:4c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "903568cdf824d38f52cb9a58c116a852c83eb599cf8cc87e25ba21b593e45142",
	                    "EndpointID": "9a2af9d91b868a8642ef1db81d818bc623c9c1134408c932f61ec269578e0c92",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-181800",
	                        "5743bf3218eb"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
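After the restart Docker assigned fresh ephemeral host ports (SSH on 127.0.0.1:32818 and the API server's 8443/tcp on 127.0.0.1:32821 in the inspect output above). Two equivalent ways to pull a single mapping out of that JSON, sketched with the same Go-template mechanism the tooling itself uses:

	docker inspect ha-181800 --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'
	docker port ha-181800 22/tcp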
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-181800 -n ha-181800
helpers_test.go:252: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p ha-181800 logs -n 25: (1.811043775s)
helpers_test.go:260: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ ha-181800 cp ha-181800-m03:/home/docker/cp-test.txt ha-181800-m04:/home/docker/cp-test_ha-181800-m03_ha-181800-m04.txt               │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ ssh     │ ha-181800 ssh -n ha-181800-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ ssh     │ ha-181800 ssh -n ha-181800-m04 sudo cat /home/docker/cp-test_ha-181800-m03_ha-181800-m04.txt                                         │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ cp      │ ha-181800 cp testdata/cp-test.txt ha-181800-m04:/home/docker/cp-test.txt                                                             │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ ssh     │ ha-181800 ssh -n ha-181800-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ cp      │ ha-181800 cp ha-181800-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1463328482/001/cp-test_ha-181800-m04.txt │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ ssh     │ ha-181800 ssh -n ha-181800-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ cp      │ ha-181800 cp ha-181800-m04:/home/docker/cp-test.txt ha-181800:/home/docker/cp-test_ha-181800-m04_ha-181800.txt                       │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ ssh     │ ha-181800 ssh -n ha-181800-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ ssh     │ ha-181800 ssh -n ha-181800 sudo cat /home/docker/cp-test_ha-181800-m04_ha-181800.txt                                                 │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ cp      │ ha-181800 cp ha-181800-m04:/home/docker/cp-test.txt ha-181800-m02:/home/docker/cp-test_ha-181800-m04_ha-181800-m02.txt               │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ ssh     │ ha-181800 ssh -n ha-181800-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ ssh     │ ha-181800 ssh -n ha-181800-m02 sudo cat /home/docker/cp-test_ha-181800-m04_ha-181800-m02.txt                                         │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ cp      │ ha-181800 cp ha-181800-m04:/home/docker/cp-test.txt ha-181800-m03:/home/docker/cp-test_ha-181800-m04_ha-181800-m03.txt               │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ ssh     │ ha-181800 ssh -n ha-181800-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ ssh     │ ha-181800 ssh -n ha-181800-m03 sudo cat /home/docker/cp-test_ha-181800-m04_ha-181800-m03.txt                                         │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ node    │ ha-181800 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ node    │ ha-181800 node start m02 --alsologtostderr -v 5                                                                                      │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:39 UTC │
	│ node    │ ha-181800 node list --alsologtostderr -v 5                                                                                           │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:39 UTC │                     │
	│ stop    │ ha-181800 stop --alsologtostderr -v 5                                                                                                │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:39 UTC │ 18 Oct 25 17:39 UTC │
	│ start   │ ha-181800 start --wait true --alsologtostderr -v 5                                                                                   │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:39 UTC │                     │
	│ node    │ ha-181800 node list --alsologtostderr -v 5                                                                                           │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:47 UTC │                     │
	│ node    │ ha-181800 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:47 UTC │                     │
	│ stop    │ ha-181800 stop --alsologtostderr -v 5                                                                                                │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:47 UTC │ 18 Oct 25 17:48 UTC │
	│ start   │ ha-181800 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio                                         │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:48 UTC │ 18 Oct 25 17:51 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 17:48:09
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 17:48:09.416034   69488 out.go:360] Setting OutFile to fd 1 ...
	I1018 17:48:09.416413   69488 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 17:48:09.416429   69488 out.go:374] Setting ErrFile to fd 2...
	I1018 17:48:09.416435   69488 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 17:48:09.416751   69488 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-2509/.minikube/bin
	I1018 17:48:09.417210   69488 out.go:368] Setting JSON to false
	I1018 17:48:09.418048   69488 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":5439,"bootTime":1760804251,"procs":151,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1018 17:48:09.418116   69488 start.go:141] virtualization:  
	I1018 17:48:09.421406   69488 out.go:179] * [ha-181800] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 17:48:09.425201   69488 out.go:179]   - MINIKUBE_LOCATION=21409
	I1018 17:48:09.425270   69488 notify.go:220] Checking for updates...
	I1018 17:48:09.431395   69488 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 17:48:09.434249   69488 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-2509/kubeconfig
	I1018 17:48:09.437177   69488 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-2509/.minikube
	I1018 17:48:09.439990   69488 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 17:48:09.442873   69488 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 17:48:09.446186   69488 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:48:09.446753   69488 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 17:48:09.469689   69488 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 17:48:09.469810   69488 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 17:48:09.525756   69488 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-18 17:48:09.516473467 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 17:48:09.525901   69488 docker.go:318] overlay module found
	I1018 17:48:09.529121   69488 out.go:179] * Using the docker driver based on existing profile
	I1018 17:48:09.532020   69488 start.go:305] selected driver: docker
	I1018 17:48:09.532065   69488 start.go:925] validating driver "docker" against &{Name:ha-181800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-181800 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inacc
el:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 17:48:09.532200   69488 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 17:48:09.532300   69488 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 17:48:09.595274   69488 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-18 17:48:09.586260967 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 17:48:09.595672   69488 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 17:48:09.595711   69488 cni.go:84] Creating CNI manager for ""
	I1018 17:48:09.595769   69488 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1018 17:48:09.595821   69488 start.go:349] cluster config:
	{Name:ha-181800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-181800 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 17:48:09.600762   69488 out.go:179] * Starting "ha-181800" primary control-plane node in "ha-181800" cluster
	I1018 17:48:09.603624   69488 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 17:48:09.606573   69488 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 17:48:09.609415   69488 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 17:48:09.609455   69488 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 17:48:09.609472   69488 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1018 17:48:09.609485   69488 cache.go:58] Caching tarball of preloaded images
	I1018 17:48:09.609580   69488 preload.go:233] Found /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 17:48:09.609590   69488 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 17:48:09.609731   69488 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/config.json ...
	I1018 17:48:09.629715   69488 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 17:48:09.629738   69488 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 17:48:09.629751   69488 cache.go:232] Successfully downloaded all kic artifacts
	I1018 17:48:09.629773   69488 start.go:360] acquireMachinesLock for ha-181800: {Name:mk3f5dfba2ab7d01f94f924dfcc5edab5f076901 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 17:48:09.629829   69488 start.go:364] duration metric: took 36.414µs to acquireMachinesLock for "ha-181800"
	I1018 17:48:09.629854   69488 start.go:96] Skipping create...Using existing machine configuration
	I1018 17:48:09.629859   69488 fix.go:54] fixHost starting: 
	I1018 17:48:09.630111   69488 cli_runner.go:164] Run: docker container inspect ha-181800 --format={{.State.Status}}
	I1018 17:48:09.646601   69488 fix.go:112] recreateIfNeeded on ha-181800: state=Stopped err=<nil>
	W1018 17:48:09.646633   69488 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 17:48:09.649905   69488 out.go:252] * Restarting existing docker container for "ha-181800" ...
	I1018 17:48:09.649988   69488 cli_runner.go:164] Run: docker start ha-181800
	I1018 17:48:09.903186   69488 cli_runner.go:164] Run: docker container inspect ha-181800 --format={{.State.Status}}
	I1018 17:48:09.925021   69488 kic.go:430] container "ha-181800" state is running.
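The fixHost sequence above inspects the stopped container's state with "docker container inspect --format={{.State.Status}}", runs "docker start", and re-inspects until the container reports running. A minimal Go sketch of that restart-and-poll pattern, shelling out to the docker CLI the way the cli_runner lines do (the helper names and the 30-second timeout are illustrative, not minikube's actual code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// containerState returns the value of .State.Status for a container,
// e.g. "running" or "exited", by shelling out to the docker CLI.
func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

// ensureRunning starts the container if it is not running and polls
// until it reports "running" or the deadline passes.
func ensureRunning(name string, timeout time.Duration) error {
	state, err := containerState(name)
	if err != nil {
		return err
	}
	if state != "running" {
		if err := exec.Command("docker", "start", name).Run(); err != nil {
			return fmt.Errorf("docker start %s: %w", name, err)
		}
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if state, _ := containerState(name); state == "running" {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("container %s not running after %s", name, timeout)
}

func main() {
	if err := ensureRunning("ha-181800", 30*time.Second); err != nil {
		fmt.Println("error:", err)
	}
}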
	I1018 17:48:09.925620   69488 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-181800
	I1018 17:48:09.948773   69488 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/config.json ...
	I1018 17:48:09.949327   69488 machine.go:93] provisionDockerMachine start ...
	I1018 17:48:09.949403   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:48:09.972918   69488 main.go:141] libmachine: Using SSH client type: native
	I1018 17:48:09.973247   69488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I1018 17:48:09.973265   69488 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 17:48:09.973813   69488 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1018 17:48:13.124675   69488 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-181800
	
	I1018 17:48:13.124706   69488 ubuntu.go:182] provisioning hostname "ha-181800"
	I1018 17:48:13.124768   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:48:13.142493   69488 main.go:141] libmachine: Using SSH client type: native
	I1018 17:48:13.142802   69488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I1018 17:48:13.142819   69488 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-181800 && echo "ha-181800" | sudo tee /etc/hostname
	I1018 17:48:13.298978   69488 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-181800
	
	I1018 17:48:13.299071   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:48:13.318549   69488 main.go:141] libmachine: Using SSH client type: native
	I1018 17:48:13.318864   69488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I1018 17:48:13.318885   69488 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-181800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-181800/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-181800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 17:48:13.464891   69488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 17:48:13.464913   69488 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-2509/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-2509/.minikube}
	I1018 17:48:13.464930   69488 ubuntu.go:190] setting up certificates
	I1018 17:48:13.464957   69488 provision.go:84] configureAuth start
	I1018 17:48:13.465015   69488 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-181800
	I1018 17:48:13.482208   69488 provision.go:143] copyHostCerts
	I1018 17:48:13.482250   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem
	I1018 17:48:13.482283   69488 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem, removing ...
	I1018 17:48:13.482302   69488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem
	I1018 17:48:13.482377   69488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem (1078 bytes)
	I1018 17:48:13.482463   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem
	I1018 17:48:13.482486   69488 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem, removing ...
	I1018 17:48:13.482493   69488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem
	I1018 17:48:13.482520   69488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem (1123 bytes)
	I1018 17:48:13.482562   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem
	I1018 17:48:13.482582   69488 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem, removing ...
	I1018 17:48:13.482588   69488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem
	I1018 17:48:13.482612   69488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem (1675 bytes)
	I1018 17:48:13.482660   69488 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem org=jenkins.ha-181800 san=[127.0.0.1 192.168.49.2 ha-181800 localhost minikube]
	I1018 17:48:14.423915   69488 provision.go:177] copyRemoteCerts
	I1018 17:48:14.423988   69488 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 17:48:14.424038   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:48:14.441172   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800/id_rsa Username:docker}
	I1018 17:48:14.544666   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1018 17:48:14.544730   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1018 17:48:14.562271   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1018 17:48:14.562355   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 17:48:14.579774   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1018 17:48:14.579882   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 17:48:14.597738   69488 provision.go:87] duration metric: took 1.132758135s to configureAuth
	I1018 17:48:14.597766   69488 ubuntu.go:206] setting minikube options for container-runtime
	I1018 17:48:14.598014   69488 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:48:14.598118   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:48:14.616530   69488 main.go:141] libmachine: Using SSH client type: native
	I1018 17:48:14.616832   69488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I1018 17:48:14.616852   69488 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 17:48:14.938623   69488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 17:48:14.938694   69488 machine.go:96] duration metric: took 4.989343324s to provisionDockerMachine
	I1018 17:48:14.938719   69488 start.go:293] postStartSetup for "ha-181800" (driver="docker")
	I1018 17:48:14.938743   69488 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 17:48:14.938827   69488 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 17:48:14.938907   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:48:14.961006   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800/id_rsa Username:docker}
	I1018 17:48:15.069145   69488 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 17:48:15.072788   69488 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 17:48:15.072820   69488 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 17:48:15.072832   69488 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/addons for local assets ...
	I1018 17:48:15.072889   69488 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/files for local assets ...
	I1018 17:48:15.073008   69488 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> 43202.pem in /etc/ssl/certs
	I1018 17:48:15.073020   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> /etc/ssl/certs/43202.pem
	I1018 17:48:15.073124   69488 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 17:48:15.080710   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /etc/ssl/certs/43202.pem (1708 bytes)
	I1018 17:48:15.098679   69488 start.go:296] duration metric: took 159.932309ms for postStartSetup
	I1018 17:48:15.098839   69488 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 17:48:15.098888   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:48:15.116684   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800/id_rsa Username:docker}
	I1018 17:48:15.217789   69488 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 17:48:15.222543   69488 fix.go:56] duration metric: took 5.59267659s for fixHost
	I1018 17:48:15.222570   69488 start.go:83] releasing machines lock for "ha-181800", held for 5.59272729s
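Each SSH connection in the log first resolves the host port Docker published for the container's 22/tcp (32818 here) using a "docker container inspect -f" Go template. A small sketch of that lookup, shown as an illustration of the technique rather than minikube's own helper:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshHostPort returns the host port Docker published for the
// container's 22/tcp, using the same inspect template seen in the log.
func sshHostPort(container string) (string, error) {
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect",
		"-f", tmpl, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("ha-181800")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	// With the port in hand, an SSH client dials 127.0.0.1:<port>.
	fmt.Printf("ssh endpoint: 127.0.0.1:%s\n", port)
}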
	I1018 17:48:15.222640   69488 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-181800
	I1018 17:48:15.239602   69488 ssh_runner.go:195] Run: cat /version.json
	I1018 17:48:15.239657   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:48:15.239935   69488 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 17:48:15.239989   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:48:15.258489   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800/id_rsa Username:docker}
	I1018 17:48:15.259704   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800/id_rsa Username:docker}
	I1018 17:48:15.360628   69488 ssh_runner.go:195] Run: systemctl --version
	I1018 17:48:15.453252   69488 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 17:48:15.490459   69488 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 17:48:15.494882   69488 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 17:48:15.494987   69488 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 17:48:15.502526   69488 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 17:48:15.502555   69488 start.go:495] detecting cgroup driver to use...
	I1018 17:48:15.502585   69488 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 17:48:15.502634   69488 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 17:48:15.518083   69488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 17:48:15.531171   69488 docker.go:218] disabling cri-docker service (if available) ...
	I1018 17:48:15.531254   69488 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 17:48:15.547013   69488 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 17:48:15.559697   69488 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 17:48:15.666369   69488 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 17:48:15.774518   69488 docker.go:234] disabling docker service ...
	I1018 17:48:15.774580   69488 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 17:48:15.789730   69488 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 17:48:15.802288   69488 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 17:48:15.919408   69488 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 17:48:16.029842   69488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 17:48:16.043317   69488 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 17:48:16.059310   69488 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 17:48:16.059453   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:48:16.069280   69488 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 17:48:16.069350   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:48:16.078814   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:48:16.087874   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:48:16.097837   69488 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 17:48:16.106890   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:48:16.115708   69488 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:48:16.123935   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:48:16.132770   69488 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 17:48:16.140320   69488 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 17:48:16.147761   69488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 17:48:16.260916   69488 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 17:48:16.404712   69488 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 17:48:16.404830   69488 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 17:48:16.408509   69488 start.go:563] Will wait 60s for crictl version
	I1018 17:48:16.408623   69488 ssh_runner.go:195] Run: which crictl
	I1018 17:48:16.411907   69488 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 17:48:16.435137   69488 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 17:48:16.435295   69488 ssh_runner.go:195] Run: crio --version
	I1018 17:48:16.466039   69488 ssh_runner.go:195] Run: crio --version
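After restarting CRI-O, the log waits up to 60s for /var/run/crio/crio.sock to appear and then for crictl to answer with the runtime version. A minimal sketch of that readiness loop, assuming the crictl binary is on PATH (the polling helper itself is hypothetical):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// waitForFile polls until path exists or the timeout elapses.
func waitForFile(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	sock := "/var/run/crio/crio.sock"
	if err := waitForFile(sock, 60*time.Second); err != nil {
		fmt.Println("error:", err)
		return
	}
	// Once the socket exists, crictl can report the runtime name and
	// version, as in the "Version: 0.1.0 / RuntimeName: cri-o" output above.
	out, err := exec.Command("crictl", "version").CombinedOutput()
	if err != nil {
		fmt.Println("crictl version failed:", err)
		return
	}
	fmt.Print(string(out))
}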
	I1018 17:48:16.501936   69488 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 17:48:16.504878   69488 cli_runner.go:164] Run: docker network inspect ha-181800 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 17:48:16.520780   69488 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1018 17:48:16.524665   69488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 17:48:16.534613   69488 kubeadm.go:883] updating cluster {Name:ha-181800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-181800 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:
false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 17:48:16.534762   69488 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 17:48:16.534819   69488 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 17:48:16.574503   69488 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 17:48:16.574531   69488 crio.go:433] Images already preloaded, skipping extraction
	I1018 17:48:16.574590   69488 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 17:48:16.600203   69488 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 17:48:16.600227   69488 cache_images.go:85] Images are preloaded, skipping loading
	I1018 17:48:16.600237   69488 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1018 17:48:16.600342   69488 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-181800 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-181800 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
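The kubelet drop-in generated above follows the usual systemd convention: an empty "ExecStart=" line clears the unit's original start command before the override with node-specific flags (--hostname-override, --node-ip) is declared. A sketch of rendering such a drop-in with text/template; the struct and values are illustrative, not minikube's generator:

package main

import (
	"os"
	"text/template"
)

// dropIn mirrors the shape of the 10-kubeadm.conf drop-in in the log:
// the bare "ExecStart=" resets the unit's ExecStart before overriding it.
const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.KubeletPath}} --hostname-override={{.NodeName}} --node-ip={{.NodeIP}} --kubeconfig=/etc/kubernetes/kubelet.conf

[Install]
`

type nodeFlags struct {
	KubeletPath string
	NodeName    string
	NodeIP      string
}

func main() {
	t := template.Must(template.New("dropin").Parse(dropIn))
	// Values taken from the log for illustration only.
	_ = t.Execute(os.Stdout, nodeFlags{
		KubeletPath: "/var/lib/minikube/binaries/v1.34.1/kubelet",
		NodeName:    "ha-181800",
		NodeIP:      "192.168.49.2",
	})
}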
	I1018 17:48:16.600422   69488 ssh_runner.go:195] Run: crio config
	I1018 17:48:16.665910   69488 cni.go:84] Creating CNI manager for ""
	I1018 17:48:16.665937   69488 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1018 17:48:16.665961   69488 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 17:48:16.665986   69488 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-181800 NodeName:ha-181800 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 17:48:16.666112   69488 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-181800"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 17:48:16.666132   69488 kube-vip.go:115] generating kube-vip config ...
	I1018 17:48:16.666191   69488 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1018 17:48:16.678158   69488 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1018 17:48:16.678333   69488 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1018 17:48:16.678419   69488 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 17:48:16.686215   69488 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 17:48:16.686327   69488 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1018 17:48:16.693873   69488 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1018 17:48:16.706512   69488 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 17:48:16.719311   69488 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1018 17:48:16.731738   69488 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1018 17:48:16.744107   69488 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1018 17:48:16.747479   69488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
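The bash one-liner above makes the hosts entry idempotent: it filters out any existing control-plane.minikube.internal line, appends the VIP mapping, and copies the result back over /etc/hosts. A minimal Go sketch of the same rewrite technique, assuming a local sample file rather than the node's /etc/hosts:

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost rewrites a hosts file so that exactly one line maps the
// given name, mirroring the grep -v / echo / cp pattern in the log.
func upsertHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// Drop any existing entry for the name (tab-separated, as minikube writes it).
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, path)
}

func main() {
	// Illustrative only; on the node this runs against /etc/hosts with sudo.
	if err := upsertHost("hosts.sample", "192.168.49.254", "control-plane.minikube.internal"); err != nil {
		fmt.Println("error:", err)
	}
}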
	I1018 17:48:16.756979   69488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 17:48:16.873983   69488 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 17:48:16.890078   69488 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800 for IP: 192.168.49.2
	I1018 17:48:16.890141   69488 certs.go:195] generating shared ca certs ...
	I1018 17:48:16.890170   69488 certs.go:227] acquiring lock for ca certs: {Name:mk544ed642fa2832e9f6dd22fa45f3270b7c1ad4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:48:16.890342   69488 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key
	I1018 17:48:16.890408   69488 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key
	I1018 17:48:16.890429   69488 certs.go:257] generating profile certs ...
	I1018 17:48:16.890571   69488 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/client.key
	I1018 17:48:16.890683   69488 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.key.46a58690
	I1018 17:48:16.890745   69488 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.key
	I1018 17:48:16.890767   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1018 17:48:16.890806   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1018 17:48:16.890839   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1018 17:48:16.890866   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1018 17:48:16.890905   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1018 17:48:16.890937   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1018 17:48:16.890965   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1018 17:48:16.891003   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1018 17:48:16.891075   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem (1338 bytes)
	W1018 17:48:16.891135   69488 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320_empty.pem, impossibly tiny 0 bytes
	I1018 17:48:16.891163   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 17:48:16.891206   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem (1078 bytes)
	I1018 17:48:16.891265   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem (1123 bytes)
	I1018 17:48:16.891308   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem (1675 bytes)
	I1018 17:48:16.891389   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem (1708 bytes)
	I1018 17:48:16.891447   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> /usr/share/ca-certificates/43202.pem
	I1018 17:48:16.891488   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:48:16.891521   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem -> /usr/share/ca-certificates/4320.pem
	I1018 17:48:16.892071   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 17:48:16.910107   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 17:48:16.927560   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 17:48:16.944252   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 17:48:16.961007   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1018 17:48:16.981715   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 17:48:17.002129   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 17:48:17.028151   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 17:48:17.050134   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /usr/share/ca-certificates/43202.pem (1708 bytes)
	I1018 17:48:17.076842   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 17:48:17.102342   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem --> /usr/share/ca-certificates/4320.pem (1338 bytes)
	I1018 17:48:17.120809   69488 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 17:48:17.135197   69488 ssh_runner.go:195] Run: openssl version
	I1018 17:48:17.141316   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43202.pem && ln -fs /usr/share/ca-certificates/43202.pem /etc/ssl/certs/43202.pem"
	I1018 17:48:17.149779   69488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43202.pem
	I1018 17:48:17.156384   69488 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 17:19 /usr/share/ca-certificates/43202.pem
	I1018 17:48:17.156498   69488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43202.pem
	I1018 17:48:17.198104   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43202.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 17:48:17.206025   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 17:48:17.214061   69488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:48:17.217558   69488 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:48:17.217636   69488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:48:17.259653   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 17:48:17.267330   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4320.pem && ln -fs /usr/share/ca-certificates/4320.pem /etc/ssl/certs/4320.pem"
	I1018 17:48:17.275410   69488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4320.pem
	I1018 17:48:17.278912   69488 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 17:19 /usr/share/ca-certificates/4320.pem
	I1018 17:48:17.279004   69488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4320.pem
	I1018 17:48:17.319663   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4320.pem /etc/ssl/certs/51391683.0"
	I1018 17:48:17.327893   69488 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 17:48:17.331787   69488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 17:48:17.372669   69488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 17:48:17.413640   69488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 17:48:17.455669   69488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 17:48:17.503310   69488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 17:48:17.553128   69488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
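The series of "openssl x509 -noout -checkend 86400" runs above asks whether each control-plane certificate expires within the next 24 hours before the existing cluster configuration is reused. The same check can be done with crypto/x509; a sketch, with the certificate paths copied from the log for illustration:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires
// within the given window, the question `openssl x509 -checkend`
// answers above (86400s = 24h).
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
	} {
		soon, err := expiresWithin(p, 24*time.Hour)
		if err != nil {
			fmt.Println(p, "error:", err)
			continue
		}
		fmt.Printf("%s expires within 24h: %v\n", p, soon)
	}
}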
	I1018 17:48:17.610923   69488 kubeadm.go:400] StartCluster: {Name:ha-181800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-181800 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:fal
se ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fals
e DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 17:48:17.611069   69488 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 17:48:17.611141   69488 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 17:48:17.693793   69488 cri.go:89] found id: "42139c5070f82bb1e1dd7564661f58a74b134ab219b910335d022b2235e65fc0"
	I1018 17:48:17.693817   69488 cri.go:89] found id: "405d4b2711179ef2be985a5942049e2e36688b992d1fd9f96f2e882cfa95bfd5"
	I1018 17:48:17.693822   69488 cri.go:89] found id: "fb83e2f9880f48e77ccba9ff1a0240a5eacc8c5f0b7758c70e7c19289ba8795a"
	I1018 17:48:17.693826   69488 cri.go:89] found id: ""
	I1018 17:48:17.693886   69488 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 17:48:17.727781   69488 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T17:48:17Z" level=error msg="open /run/runc: no such file or directory"
	I1018 17:48:17.727885   69488 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 17:48:17.752985   69488 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 17:48:17.753011   69488 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 17:48:17.753077   69488 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 17:48:17.766549   69488 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 17:48:17.766998   69488 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-181800" does not appear in /home/jenkins/minikube-integration/21409-2509/kubeconfig
	I1018 17:48:17.767116   69488 kubeconfig.go:62] /home/jenkins/minikube-integration/21409-2509/kubeconfig needs updating (will repair): [kubeconfig missing "ha-181800" cluster setting kubeconfig missing "ha-181800" context setting]
	I1018 17:48:17.767408   69488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/kubeconfig: {Name:mk229d7d0b6575771899e0ce8a346bbddf5cf86b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:48:17.768000   69488 kapi.go:59] client config for ha-181800: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/client.key", CAFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, Us
erAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1018 17:48:17.768691   69488 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1018 17:48:17.768713   69488 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1018 17:48:17.768754   69488 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1018 17:48:17.768718   69488 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1018 17:48:17.768800   69488 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1018 17:48:17.768817   69488 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1018 17:48:17.769158   69488 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 17:48:17.777893   69488 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1018 17:48:17.777928   69488 kubeadm.go:601] duration metric: took 24.910349ms to restartPrimaryControlPlane
	I1018 17:48:17.777937   69488 kubeadm.go:402] duration metric: took 167.022952ms to StartCluster
	I1018 17:48:17.777952   69488 settings.go:142] acquiring lock: {Name:mk3a3fd093bc95e20cc1842611fedcbe4a79e692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:48:17.778019   69488 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-2509/kubeconfig
	I1018 17:48:17.778655   69488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/kubeconfig: {Name:mk229d7d0b6575771899e0ce8a346bbddf5cf86b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:48:17.778876   69488 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 17:48:17.778908   69488 start.go:241] waiting for startup goroutines ...
	I1018 17:48:17.778916   69488 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 17:48:17.779460   69488 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:48:17.784791   69488 out.go:179] * Enabled addons: 
	I1018 17:48:17.787780   69488 addons.go:514] duration metric: took 8.843165ms for enable addons: enabled=[]
	I1018 17:48:17.787841   69488 start.go:246] waiting for cluster config update ...
	I1018 17:48:17.787851   69488 start.go:255] writing updated cluster config ...
	I1018 17:48:17.791154   69488 out.go:203] 
	I1018 17:48:17.794423   69488 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:48:17.794545   69488 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/config.json ...
	I1018 17:48:17.797951   69488 out.go:179] * Starting "ha-181800-m02" control-plane node in "ha-181800" cluster
	I1018 17:48:17.800906   69488 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 17:48:17.803852   69488 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 17:48:17.806813   69488 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 17:48:17.806848   69488 cache.go:58] Caching tarball of preloaded images
	I1018 17:48:17.806951   69488 preload.go:233] Found /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 17:48:17.806966   69488 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 17:48:17.807089   69488 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/config.json ...
	I1018 17:48:17.807301   69488 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 17:48:17.833480   69488 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 17:48:17.833505   69488 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 17:48:17.833520   69488 cache.go:232] Successfully downloaded all kic artifacts
	I1018 17:48:17.833542   69488 start.go:360] acquireMachinesLock for ha-181800-m02: {Name:mk36a488c0fbfc8557c6ba291b969aad85b45635 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 17:48:17.833604   69488 start.go:364] duration metric: took 42.142µs to acquireMachinesLock for "ha-181800-m02"
	I1018 17:48:17.833629   69488 start.go:96] Skipping create...Using existing machine configuration
	I1018 17:48:17.833638   69488 fix.go:54] fixHost starting: m02
	I1018 17:48:17.833888   69488 cli_runner.go:164] Run: docker container inspect ha-181800-m02 --format={{.State.Status}}
	I1018 17:48:17.853969   69488 fix.go:112] recreateIfNeeded on ha-181800-m02: state=Stopped err=<nil>
	W1018 17:48:17.853999   69488 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 17:48:17.859511   69488 out.go:252] * Restarting existing docker container for "ha-181800-m02" ...
	I1018 17:48:17.859599   69488 cli_runner.go:164] Run: docker start ha-181800-m02
	I1018 17:48:18.199583   69488 cli_runner.go:164] Run: docker container inspect ha-181800-m02 --format={{.State.Status}}
	I1018 17:48:18.226549   69488 kic.go:430] container "ha-181800-m02" state is running.
	I1018 17:48:18.226893   69488 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-181800-m02
	I1018 17:48:18.262995   69488 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/config.json ...
	I1018 17:48:18.263226   69488 machine.go:93] provisionDockerMachine start ...
	I1018 17:48:18.263282   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m02
	I1018 17:48:18.293143   69488 main.go:141] libmachine: Using SSH client type: native
	I1018 17:48:18.293452   69488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I1018 17:48:18.293466   69488 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 17:48:18.294119   69488 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1018 17:48:21.560416   69488 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-181800-m02
	
	I1018 17:48:21.560480   69488 ubuntu.go:182] provisioning hostname "ha-181800-m02"
	I1018 17:48:21.560583   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m02
	I1018 17:48:21.588400   69488 main.go:141] libmachine: Using SSH client type: native
	I1018 17:48:21.588705   69488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I1018 17:48:21.588717   69488 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-181800-m02 && echo "ha-181800-m02" | sudo tee /etc/hostname
	I1018 17:48:21.918738   69488 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-181800-m02
	
	I1018 17:48:21.918888   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m02
	I1018 17:48:21.950544   69488 main.go:141] libmachine: Using SSH client type: native
	I1018 17:48:21.950842   69488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I1018 17:48:21.950857   69488 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-181800-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-181800-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-181800-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 17:48:22.217685   69488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 17:48:22.217712   69488 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-2509/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-2509/.minikube}
	I1018 17:48:22.217727   69488 ubuntu.go:190] setting up certificates
	I1018 17:48:22.217741   69488 provision.go:84] configureAuth start
	I1018 17:48:22.217804   69488 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-181800-m02
	I1018 17:48:22.255770   69488 provision.go:143] copyHostCerts
	I1018 17:48:22.255810   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem
	I1018 17:48:22.255843   69488 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem, removing ...
	I1018 17:48:22.255850   69488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem
	I1018 17:48:22.255928   69488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem (1078 bytes)
	I1018 17:48:22.255999   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem
	I1018 17:48:22.256017   69488 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem, removing ...
	I1018 17:48:22.256021   69488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem
	I1018 17:48:22.256045   69488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem (1123 bytes)
	I1018 17:48:22.256080   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem
	I1018 17:48:22.256096   69488 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem, removing ...
	I1018 17:48:22.256100   69488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem
	I1018 17:48:22.256121   69488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem (1675 bytes)
	I1018 17:48:22.256204   69488 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem org=jenkins.ha-181800-m02 san=[127.0.0.1 192.168.49.3 ha-181800-m02 localhost minikube]
	I1018 17:48:22.398509   69488 provision.go:177] copyRemoteCerts
	I1018 17:48:22.398627   69488 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 17:48:22.398703   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m02
	I1018 17:48:22.417071   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m02/id_rsa Username:docker}
	I1018 17:48:22.539435   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1018 17:48:22.539497   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 17:48:22.590740   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1018 17:48:22.590799   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 17:48:22.640636   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1018 17:48:22.640749   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1018 17:48:22.682470   69488 provision.go:87] duration metric: took 464.715425ms to configureAuth
	I1018 17:48:22.682541   69488 ubuntu.go:206] setting minikube options for container-runtime
	I1018 17:48:22.682832   69488 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:48:22.682993   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m02
	I1018 17:48:22.710684   69488 main.go:141] libmachine: Using SSH client type: native
	I1018 17:48:22.710986   69488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I1018 17:48:22.711001   69488 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 17:49:53.355970   69488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 17:49:53.355994   69488 machine.go:96] duration metric: took 1m35.092758423s to provisionDockerMachine
	I1018 17:49:53.356005   69488 start.go:293] postStartSetup for "ha-181800-m02" (driver="docker")
	I1018 17:49:53.356016   69488 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 17:49:53.356073   69488 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 17:49:53.356118   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m02
	I1018 17:49:53.374240   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m02/id_rsa Username:docker}
	I1018 17:49:53.476619   69488 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 17:49:53.479822   69488 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 17:49:53.479849   69488 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 17:49:53.479860   69488 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/addons for local assets ...
	I1018 17:49:53.479932   69488 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/files for local assets ...
	I1018 17:49:53.480042   69488 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> 43202.pem in /etc/ssl/certs
	I1018 17:49:53.480053   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> /etc/ssl/certs/43202.pem
	I1018 17:49:53.480150   69488 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 17:49:53.487506   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /etc/ssl/certs/43202.pem (1708 bytes)
	I1018 17:49:53.503781   69488 start.go:296] duration metric: took 147.726679ms for postStartSetup
	I1018 17:49:53.503861   69488 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 17:49:53.503907   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m02
	I1018 17:49:53.521965   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m02/id_rsa Username:docker}
	I1018 17:49:53.622051   69488 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 17:49:53.627407   69488 fix.go:56] duration metric: took 1m35.793761422s for fixHost
	I1018 17:49:53.627431   69488 start.go:83] releasing machines lock for "ha-181800-m02", held for 1m35.793813517s
	I1018 17:49:53.627503   69488 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-181800-m02
	I1018 17:49:53.647527   69488 out.go:179] * Found network options:
	I1018 17:49:53.650482   69488 out.go:179]   - NO_PROXY=192.168.49.2
	W1018 17:49:53.653336   69488 proxy.go:120] fail to check proxy env: Error ip not in block
	W1018 17:49:53.653390   69488 proxy.go:120] fail to check proxy env: Error ip not in block
	I1018 17:49:53.653464   69488 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 17:49:53.653510   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m02
	I1018 17:49:53.653793   69488 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 17:49:53.653863   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m02
	I1018 17:49:53.671905   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m02/id_rsa Username:docker}
	I1018 17:49:53.683540   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m02/id_rsa Username:docker}
	I1018 17:49:53.861179   69488 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 17:49:53.865770   69488 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 17:49:53.865856   69488 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 17:49:53.873670   69488 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 17:49:53.873694   69488 start.go:495] detecting cgroup driver to use...
	I1018 17:49:53.873745   69488 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 17:49:53.873813   69488 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 17:49:53.888526   69488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 17:49:53.901761   69488 docker.go:218] disabling cri-docker service (if available) ...
	I1018 17:49:53.901850   69488 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 17:49:53.917699   69488 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 17:49:53.931789   69488 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 17:49:54.071500   69488 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 17:49:54.203057   69488 docker.go:234] disabling docker service ...
	I1018 17:49:54.203122   69488 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 17:49:54.218563   69488 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 17:49:54.232433   69488 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 17:49:54.361440   69488 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 17:49:54.490330   69488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 17:49:54.503221   69488 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 17:49:54.517805   69488 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 17:49:54.517883   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:49:54.527169   69488 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 17:49:54.527231   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:49:54.536041   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:49:54.544703   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:49:54.553243   69488 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 17:49:54.562614   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:49:54.571510   69488 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:49:54.579788   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:49:54.588456   69488 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 17:49:54.595820   69488 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 17:49:54.602817   69488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 17:49:54.728528   69488 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 17:49:58.621131   69488 ssh_runner.go:235] Completed: sudo systemctl restart crio: (3.89256859s)
	I1018 17:49:58.626115   69488 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 17:49:58.626223   69488 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 17:49:58.631167   69488 start.go:563] Will wait 60s for crictl version
	I1018 17:49:58.631232   69488 ssh_runner.go:195] Run: which crictl
	I1018 17:49:58.639191   69488 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 17:49:58.672795   69488 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 17:49:58.672878   69488 ssh_runner.go:195] Run: crio --version
	I1018 17:49:58.723386   69488 ssh_runner.go:195] Run: crio --version
	I1018 17:49:58.777499   69488 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 17:49:58.780571   69488 out.go:179]   - env NO_PROXY=192.168.49.2
	I1018 17:49:58.783632   69488 cli_runner.go:164] Run: docker network inspect ha-181800 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 17:49:58.815077   69488 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1018 17:49:58.819329   69488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 17:49:58.831215   69488 mustload.go:65] Loading cluster: ha-181800
	I1018 17:49:58.831449   69488 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:49:58.831716   69488 cli_runner.go:164] Run: docker container inspect ha-181800 --format={{.State.Status}}
	I1018 17:49:58.862708   69488 host.go:66] Checking if "ha-181800" exists ...
	I1018 17:49:58.863022   69488 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800 for IP: 192.168.49.3
	I1018 17:49:58.863040   69488 certs.go:195] generating shared ca certs ...
	I1018 17:49:58.863058   69488 certs.go:227] acquiring lock for ca certs: {Name:mk544ed642fa2832e9f6dd22fa45f3270b7c1ad4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:49:58.863172   69488 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key
	I1018 17:49:58.863215   69488 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key
	I1018 17:49:58.863222   69488 certs.go:257] generating profile certs ...
	I1018 17:49:58.863290   69488 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/client.key
	I1018 17:49:58.863337   69488 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.key.887e0b27
	I1018 17:49:58.863381   69488 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.key
	I1018 17:49:58.863390   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1018 17:49:58.863402   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1018 17:49:58.863414   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1018 17:49:58.863425   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1018 17:49:58.863435   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1018 17:49:58.863448   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1018 17:49:58.863470   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1018 17:49:58.863481   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1018 17:49:58.863531   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem (1338 bytes)
	W1018 17:49:58.863559   69488 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320_empty.pem, impossibly tiny 0 bytes
	I1018 17:49:58.863567   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 17:49:58.863589   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem (1078 bytes)
	I1018 17:49:58.863615   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem (1123 bytes)
	I1018 17:49:58.863635   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem (1675 bytes)
	I1018 17:49:58.863676   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem (1708 bytes)
	I1018 17:49:58.863709   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> /usr/share/ca-certificates/43202.pem
	I1018 17:49:58.863731   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:49:58.863743   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem -> /usr/share/ca-certificates/4320.pem
	I1018 17:49:58.863871   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:49:58.882935   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800/id_rsa Username:docker}
	I1018 17:49:58.981280   69488 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1018 17:49:58.984884   69488 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1018 17:49:58.992968   69488 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1018 17:49:58.996547   69488 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1018 17:49:59.005742   69488 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1018 17:49:59.009863   69488 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1018 17:49:59.018651   69488 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1018 17:49:59.022300   69488 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1018 17:49:59.030647   69488 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1018 17:49:59.034128   69488 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1018 17:49:59.042303   69488 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1018 17:49:59.045696   69488 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1018 17:49:59.054134   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 17:49:59.072336   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 17:49:59.090250   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 17:49:59.107793   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 17:49:59.124795   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1018 17:49:59.150615   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 17:49:59.169033   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 17:49:59.186177   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 17:49:59.203120   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /usr/share/ca-certificates/43202.pem (1708 bytes)
	I1018 17:49:59.220145   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 17:49:59.237999   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem --> /usr/share/ca-certificates/4320.pem (1338 bytes)
	I1018 17:49:59.257279   69488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1018 17:49:59.269634   69488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1018 17:49:59.282735   69488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1018 17:49:59.295341   69488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1018 17:49:59.308329   69488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1018 17:49:59.320556   69488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1018 17:49:59.332714   69488 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1018 17:49:59.348902   69488 ssh_runner.go:195] Run: openssl version
	I1018 17:49:59.356738   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 17:49:59.365172   69488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:49:59.368839   69488 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:49:59.368976   69488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:49:59.414784   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 17:49:59.422423   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4320.pem && ln -fs /usr/share/ca-certificates/4320.pem /etc/ssl/certs/4320.pem"
	I1018 17:49:59.430191   69488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4320.pem
	I1018 17:49:59.433619   69488 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 17:19 /usr/share/ca-certificates/4320.pem
	I1018 17:49:59.433727   69488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4320.pem
	I1018 17:49:59.474255   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4320.pem /etc/ssl/certs/51391683.0"
	I1018 17:49:59.481911   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43202.pem && ln -fs /usr/share/ca-certificates/43202.pem /etc/ssl/certs/43202.pem"
	I1018 17:49:59.490061   69488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43202.pem
	I1018 17:49:59.493763   69488 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 17:19 /usr/share/ca-certificates/43202.pem
	I1018 17:49:59.493835   69488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43202.pem
	I1018 17:49:59.534567   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43202.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 17:49:59.542475   69488 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 17:49:59.546230   69488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 17:49:59.592499   69488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 17:49:59.635764   69488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 17:49:59.676750   69488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 17:49:59.719668   69488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 17:49:59.760653   69488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1018 17:49:59.801453   69488 kubeadm.go:934] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1018 17:49:59.801594   69488 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-181800-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-181800 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 17:49:59.801625   69488 kube-vip.go:115] generating kube-vip config ...
	I1018 17:49:59.801676   69488 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1018 17:49:59.813138   69488 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1018 17:49:59.813221   69488 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1018 17:49:59.813313   69488 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 17:49:59.820930   69488 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 17:49:59.821061   69488 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1018 17:49:59.828485   69488 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1018 17:49:59.840643   69488 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 17:49:59.853675   69488 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1018 17:49:59.867836   69488 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1018 17:49:59.871456   69488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 17:49:59.881052   69488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 17:50:00.019627   69488 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 17:50:00.063785   69488 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 17:50:00.065404   69488 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:50:00.068131   69488 out.go:179] * Verifying Kubernetes components...
	I1018 17:50:00.071263   69488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 17:50:00.372789   69488 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 17:50:00.393030   69488 kapi.go:59] client config for ha-181800: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/client.key", CAFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1018 17:50:00.393170   69488 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1018 17:50:00.393487   69488 node_ready.go:35] waiting up to 6m0s for node "ha-181800-m02" to be "Ready" ...
	W1018 17:50:02.394400   69488 node_ready.go:55] error getting node "ha-181800-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-181800-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1018 17:50:08.470080   69488 node_ready.go:57] node "ha-181800-m02" has "Ready":"Unknown" status (will retry)
	I1018 17:50:09.421305   69488 node_ready.go:49] node "ha-181800-m02" is "Ready"
	I1018 17:50:09.421384   69488 node_ready.go:38] duration metric: took 9.02787205s for node "ha-181800-m02" to be "Ready" ...
	I1018 17:50:09.421422   69488 api_server.go:52] waiting for apiserver process to appear ...
	I1018 17:50:09.421500   69488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:50:09.447456   69488 api_server.go:72] duration metric: took 9.383624261s to wait for apiserver process to appear ...
	I1018 17:50:09.447520   69488 api_server.go:88] waiting for apiserver healthz status ...
	I1018 17:50:09.447553   69488 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 17:50:09.466347   69488 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 17:50:09.466422   69488 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 17:50:09.947999   69488 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 17:50:09.958418   69488 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 17:50:09.958509   69488 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 17:50:10.447814   69488 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 17:50:10.462608   69488 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1018 17:50:10.463984   69488 api_server.go:141] control plane version: v1.34.1
	I1018 17:50:10.464041   69488 api_server.go:131] duration metric: took 1.016500993s to wait for apiserver health ...
	I1018 17:50:10.464067   69488 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 17:50:10.483197   69488 system_pods.go:59] 26 kube-system pods found
	I1018 17:50:10.483289   69488 system_pods.go:61] "coredns-66bc5c9577-f6v2w" [a1fbdf00-9636-43a5-b1ed-a98bcacb5537] Running
	I1018 17:50:10.483312   69488 system_pods.go:61] "coredns-66bc5c9577-p7nbg" [9d361193-5b45-400e-8161-804fc30e7515] Running
	I1018 17:50:10.483343   69488 system_pods.go:61] "etcd-ha-181800" [3aafeb42-d09a-4b84-9739-e25adc3a4135] Running
	I1018 17:50:10.483363   69488 system_pods.go:61] "etcd-ha-181800-m02" [194d8d52-b9b6-43ae-8c1f-01b965d3ae96] Running
	I1018 17:50:10.483380   69488 system_pods.go:61] "etcd-ha-181800-m03" [f52cd0ee-6f99-49ba-8c4f-218b8d166fe2] Running
	I1018 17:50:10.483399   69488 system_pods.go:61] "kindnet-72mvm" [5edfc356-9d49-4895-b36a-06c2bd39155a] Running
	I1018 17:50:10.483417   69488 system_pods.go:61] "kindnet-86s8z" [6559ac9e-c73d-4d49-a0e1-87d630e5bec8] Running
	I1018 17:50:10.483439   69488 system_pods.go:61] "kindnet-88bv7" [3b3b9715-1e6e-4046-adae-f372381e068a] Running
	I1018 17:50:10.483466   69488 system_pods.go:61] "kindnet-9qbbw" [d1a305ed-4a0e-4ccc-90e0-04577ad4e5c4] Running
	I1018 17:50:10.483486   69488 system_pods.go:61] "kube-apiserver-ha-181800" [4966738e-d055-404d-82ad-0d3f23ef0337] Running
	I1018 17:50:10.483506   69488 system_pods.go:61] "kube-apiserver-ha-181800-m02" [344fc499-0c04-4f86-a919-3c2da1e7a1e6] Running
	I1018 17:50:10.483524   69488 system_pods.go:61] "kube-apiserver-ha-181800-m03" [ce72f944-adc2-46a9-a83c-dc75936c3e9c] Running
	I1018 17:50:10.483543   69488 system_pods.go:61] "kube-controller-manager-ha-181800" [9a4be61b-4ecc-46da-86a1-472b6da720b9] Running
	I1018 17:50:10.483573   69488 system_pods.go:61] "kube-controller-manager-ha-181800-m02" [6a519ce2-92dc-4003-8f1a-6d818fea6da3] Running
	I1018 17:50:10.483593   69488 system_pods.go:61] "kube-controller-manager-ha-181800-m03" [9d247c9d-37a0-4880-8b0a-1134ebb963ab] Running
	I1018 17:50:10.483612   69488 system_pods.go:61] "kube-proxy-dpwpn" [dfabd129-fc36-4d16-ab0f-0b9ecc015712] Running
	I1018 17:50:10.483630   69488 system_pods.go:61] "kube-proxy-fj4ww" [40c5681f-ad11-4e21-a852-5601e2a9fa6e] Running
	I1018 17:50:10.483648   69488 system_pods.go:61] "kube-proxy-qsqmb" [9e100b31-50e5-4d86-a234-0d6277009e98] Running
	I1018 17:50:10.483673   69488 system_pods.go:61] "kube-proxy-stgvm" [15b89226-91ae-478f-acfe-7841776b1377] Running
	I1018 17:50:10.483697   69488 system_pods.go:61] "kube-scheduler-ha-181800" [f4699386-754c-4fa2-8556-174d872d6825] Running
	I1018 17:50:10.483716   69488 system_pods.go:61] "kube-scheduler-ha-181800-m02" [565d55c5-9541-4ef9-a036-3d9ff03f0fa9] Running
	I1018 17:50:10.483733   69488 system_pods.go:61] "kube-scheduler-ha-181800-m03" [4f8687e4-3dbc-4c98-97a4-ab703b016798] Running
	I1018 17:50:10.483751   69488 system_pods.go:61] "kube-vip-ha-181800" [a947f5a9-6257-4ff0-9f73-2d720974668b] Running
	I1018 17:50:10.483784   69488 system_pods.go:61] "kube-vip-ha-181800-m02" [21258022-efed-42fb-b206-89ffcd8d3820] Running
	I1018 17:50:10.483812   69488 system_pods.go:61] "kube-vip-ha-181800-m03" [0087f776-5d07-4c43-906d-c63afc2cc349] Running
	I1018 17:50:10.483830   69488 system_pods.go:61] "storage-provisioner" [3c6521cd-8e1b-46aa-96a3-39e475e1426c] Running
	I1018 17:50:10.483848   69488 system_pods.go:74] duration metric: took 19.763103ms to wait for pod list to return data ...
	I1018 17:50:10.483877   69488 default_sa.go:34] waiting for default service account to be created ...
	I1018 17:50:10.493513   69488 default_sa.go:45] found service account: "default"
	I1018 17:50:10.493594   69488 default_sa.go:55] duration metric: took 9.697323ms for default service account to be created ...
	I1018 17:50:10.493625   69488 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 17:50:10.501353   69488 system_pods.go:86] 26 kube-system pods found
	I1018 17:50:10.501452   69488 system_pods.go:89] "coredns-66bc5c9577-f6v2w" [a1fbdf00-9636-43a5-b1ed-a98bcacb5537] Running
	I1018 17:50:10.501476   69488 system_pods.go:89] "coredns-66bc5c9577-p7nbg" [9d361193-5b45-400e-8161-804fc30e7515] Running
	I1018 17:50:10.501494   69488 system_pods.go:89] "etcd-ha-181800" [3aafeb42-d09a-4b84-9739-e25adc3a4135] Running
	I1018 17:50:10.501514   69488 system_pods.go:89] "etcd-ha-181800-m02" [194d8d52-b9b6-43ae-8c1f-01b965d3ae96] Running
	I1018 17:50:10.501540   69488 system_pods.go:89] "etcd-ha-181800-m03" [f52cd0ee-6f99-49ba-8c4f-218b8d166fe2] Running
	I1018 17:50:10.501560   69488 system_pods.go:89] "kindnet-72mvm" [5edfc356-9d49-4895-b36a-06c2bd39155a] Running
	I1018 17:50:10.501578   69488 system_pods.go:89] "kindnet-86s8z" [6559ac9e-c73d-4d49-a0e1-87d630e5bec8] Running
	I1018 17:50:10.501595   69488 system_pods.go:89] "kindnet-88bv7" [3b3b9715-1e6e-4046-adae-f372381e068a] Running
	I1018 17:50:10.501612   69488 system_pods.go:89] "kindnet-9qbbw" [d1a305ed-4a0e-4ccc-90e0-04577ad4e5c4] Running
	I1018 17:50:10.501639   69488 system_pods.go:89] "kube-apiserver-ha-181800" [4966738e-d055-404d-82ad-0d3f23ef0337] Running
	I1018 17:50:10.501660   69488 system_pods.go:89] "kube-apiserver-ha-181800-m02" [344fc499-0c04-4f86-a919-3c2da1e7a1e6] Running
	I1018 17:50:10.501677   69488 system_pods.go:89] "kube-apiserver-ha-181800-m03" [ce72f944-adc2-46a9-a83c-dc75936c3e9c] Running
	I1018 17:50:10.501694   69488 system_pods.go:89] "kube-controller-manager-ha-181800" [9a4be61b-4ecc-46da-86a1-472b6da720b9] Running
	I1018 17:50:10.501711   69488 system_pods.go:89] "kube-controller-manager-ha-181800-m02" [6a519ce2-92dc-4003-8f1a-6d818fea6da3] Running
	I1018 17:50:10.501737   69488 system_pods.go:89] "kube-controller-manager-ha-181800-m03" [9d247c9d-37a0-4880-8b0a-1134ebb963ab] Running
	I1018 17:50:10.501756   69488 system_pods.go:89] "kube-proxy-dpwpn" [dfabd129-fc36-4d16-ab0f-0b9ecc015712] Running
	I1018 17:50:10.501776   69488 system_pods.go:89] "kube-proxy-fj4ww" [40c5681f-ad11-4e21-a852-5601e2a9fa6e] Running
	I1018 17:50:10.501793   69488 system_pods.go:89] "kube-proxy-qsqmb" [9e100b31-50e5-4d86-a234-0d6277009e98] Running
	I1018 17:50:10.501809   69488 system_pods.go:89] "kube-proxy-stgvm" [15b89226-91ae-478f-acfe-7841776b1377] Running
	I1018 17:50:10.501836   69488 system_pods.go:89] "kube-scheduler-ha-181800" [f4699386-754c-4fa2-8556-174d872d6825] Running
	I1018 17:50:10.501855   69488 system_pods.go:89] "kube-scheduler-ha-181800-m02" [565d55c5-9541-4ef9-a036-3d9ff03f0fa9] Running
	I1018 17:50:10.501872   69488 system_pods.go:89] "kube-scheduler-ha-181800-m03" [4f8687e4-3dbc-4c98-97a4-ab703b016798] Running
	I1018 17:50:10.501889   69488 system_pods.go:89] "kube-vip-ha-181800" [a947f5a9-6257-4ff0-9f73-2d720974668b] Running
	I1018 17:50:10.501906   69488 system_pods.go:89] "kube-vip-ha-181800-m02" [21258022-efed-42fb-b206-89ffcd8d3820] Running
	I1018 17:50:10.501923   69488 system_pods.go:89] "kube-vip-ha-181800-m03" [0087f776-5d07-4c43-906d-c63afc2cc349] Running
	I1018 17:50:10.501939   69488 system_pods.go:89] "storage-provisioner" [3c6521cd-8e1b-46aa-96a3-39e475e1426c] Running
	I1018 17:50:10.501958   69488 system_pods.go:126] duration metric: took 8.313403ms to wait for k8s-apps to be running ...
	I1018 17:50:10.501982   69488 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 17:50:10.502072   69488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 17:50:10.521995   69488 system_svc.go:56] duration metric: took 20.005468ms WaitForService to wait for kubelet
	I1018 17:50:10.522064   69488 kubeadm.go:586] duration metric: took 10.458238282s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 17:50:10.522097   69488 node_conditions.go:102] verifying NodePressure condition ...
	I1018 17:50:10.529801   69488 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 17:50:10.529839   69488 node_conditions.go:123] node cpu capacity is 2
	I1018 17:50:10.529851   69488 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 17:50:10.529856   69488 node_conditions.go:123] node cpu capacity is 2
	I1018 17:50:10.529860   69488 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 17:50:10.529864   69488 node_conditions.go:123] node cpu capacity is 2
	I1018 17:50:10.529868   69488 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 17:50:10.529873   69488 node_conditions.go:123] node cpu capacity is 2
	I1018 17:50:10.529878   69488 node_conditions.go:105] duration metric: took 7.761413ms to run NodePressure ...
	I1018 17:50:10.529893   69488 start.go:241] waiting for startup goroutines ...
	I1018 17:50:10.529919   69488 start.go:255] writing updated cluster config ...
	I1018 17:50:10.533578   69488 out.go:203] 
	I1018 17:50:10.536806   69488 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:50:10.536948   69488 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/config.json ...
	I1018 17:50:10.540446   69488 out.go:179] * Starting "ha-181800-m03" control-plane node in "ha-181800" cluster
	I1018 17:50:10.544213   69488 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 17:50:10.547247   69488 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 17:50:10.550234   69488 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 17:50:10.550276   69488 cache.go:58] Caching tarball of preloaded images
	I1018 17:50:10.550383   69488 preload.go:233] Found /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 17:50:10.550399   69488 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 17:50:10.550572   69488 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/config.json ...
	I1018 17:50:10.550792   69488 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 17:50:10.581920   69488 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 17:50:10.581944   69488 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 17:50:10.581957   69488 cache.go:232] Successfully downloaded all kic artifacts
	I1018 17:50:10.581981   69488 start.go:360] acquireMachinesLock for ha-181800-m03: {Name:mk3bd15228a4ef4b7c016e23b190ad29deb5e3c7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 17:50:10.582039   69488 start.go:364] duration metric: took 38.023µs to acquireMachinesLock for "ha-181800-m03"
	I1018 17:50:10.582062   69488 start.go:96] Skipping create...Using existing machine configuration
	I1018 17:50:10.582068   69488 fix.go:54] fixHost starting: m03
	I1018 17:50:10.582331   69488 cli_runner.go:164] Run: docker container inspect ha-181800-m03 --format={{.State.Status}}
	I1018 17:50:10.604865   69488 fix.go:112] recreateIfNeeded on ha-181800-m03: state=Stopped err=<nil>
	W1018 17:50:10.604890   69488 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 17:50:10.607957   69488 out.go:252] * Restarting existing docker container for "ha-181800-m03" ...
	I1018 17:50:10.608050   69488 cli_runner.go:164] Run: docker start ha-181800-m03
	I1018 17:50:10.899418   69488 cli_runner.go:164] Run: docker container inspect ha-181800-m03 --format={{.State.Status}}
	I1018 17:50:10.926262   69488 kic.go:430] container "ha-181800-m03" state is running.
	I1018 17:50:10.926628   69488 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-181800-m03
	I1018 17:50:10.950821   69488 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/config.json ...
	I1018 17:50:10.951066   69488 machine.go:93] provisionDockerMachine start ...
	I1018 17:50:10.951120   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m03
	I1018 17:50:10.976987   69488 main.go:141] libmachine: Using SSH client type: native
	I1018 17:50:10.977281   69488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1018 17:50:10.977290   69488 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 17:50:10.978264   69488 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1018 17:50:14.380761   69488 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-181800-m03
	
	I1018 17:50:14.380788   69488 ubuntu.go:182] provisioning hostname "ha-181800-m03"
	I1018 17:50:14.380865   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m03
	I1018 17:50:14.409115   69488 main.go:141] libmachine: Using SSH client type: native
	I1018 17:50:14.409426   69488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1018 17:50:14.409441   69488 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-181800-m03 && echo "ha-181800-m03" | sudo tee /etc/hostname
	I1018 17:50:14.717264   69488 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-181800-m03
	
	I1018 17:50:14.717353   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m03
	I1018 17:50:14.739028   69488 main.go:141] libmachine: Using SSH client type: native
	I1018 17:50:14.739335   69488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1018 17:50:14.739352   69488 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-181800-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-181800-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-181800-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 17:50:14.965850   69488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
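The two SSH commands above set the container's hostname and pin it in /etc/hosts. A quick way to confirm the result by hand, from a shell inside the node (for example via `minikube ssh -p ha-181800 -n m03`, with the profile and node names taken from this log):

    hostname                          # should print ha-181800-m03
    grep ha-181800-m03 /etc/hosts     # expect the 127.0.1.1 entry written above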
	I1018 17:50:14.965903   69488 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-2509/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-2509/.minikube}
	I1018 17:50:14.965931   69488 ubuntu.go:190] setting up certificates
	I1018 17:50:14.965940   69488 provision.go:84] configureAuth start
	I1018 17:50:14.966014   69488 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-181800-m03
	I1018 17:50:15.001400   69488 provision.go:143] copyHostCerts
	I1018 17:50:15.001447   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem
	I1018 17:50:15.001479   69488 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem, removing ...
	I1018 17:50:15.001492   69488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem
	I1018 17:50:15.001591   69488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem (1078 bytes)
	I1018 17:50:15.001685   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem
	I1018 17:50:15.001709   69488 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem, removing ...
	I1018 17:50:15.001717   69488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem
	I1018 17:50:15.001745   69488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem (1123 bytes)
	I1018 17:50:15.001793   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem
	I1018 17:50:15.001814   69488 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem, removing ...
	I1018 17:50:15.001822   69488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem
	I1018 17:50:15.001846   69488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem (1675 bytes)
	I1018 17:50:15.001898   69488 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem org=jenkins.ha-181800-m03 san=[127.0.0.1 192.168.49.4 ha-181800-m03 localhost minikube]
	I1018 17:50:15.478787   69488 provision.go:177] copyRemoteCerts
	I1018 17:50:15.478855   69488 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 17:50:15.478897   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m03
	I1018 17:50:15.499352   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m03/id_rsa Username:docker}
	I1018 17:50:15.670546   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1018 17:50:15.670610   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 17:50:15.737652   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1018 17:50:15.737722   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1018 17:50:15.785672   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1018 17:50:15.785736   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 17:50:15.819920   69488 provision.go:87] duration metric: took 853.956632ms to configureAuth
	I1018 17:50:15.819958   69488 ubuntu.go:206] setting minikube options for container-runtime
	I1018 17:50:15.820214   69488 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:50:15.820332   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m03
	I1018 17:50:15.865677   69488 main.go:141] libmachine: Using SSH client type: native
	I1018 17:50:15.866025   69488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1018 17:50:15.866041   69488 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 17:50:16.412687   69488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 17:50:16.412751   69488 machine.go:96] duration metric: took 5.461676033s to provisionDockerMachine
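The options file written just above can be inspected on the node to confirm the insecure-registry range took effect. This is an illustrative check, not part of the test; the path and expected content come directly from the tee command above:

    cat /etc/sysconfig/crio.minikube
    # expected, per the command above:
    # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    systemctl is-active crio          # crio was restarted by the same SSH command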
	I1018 17:50:16.412774   69488 start.go:293] postStartSetup for "ha-181800-m03" (driver="docker")
	I1018 17:50:16.412799   69488 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 17:50:16.412889   69488 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 17:50:16.413002   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m03
	I1018 17:50:16.433582   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m03/id_rsa Username:docker}
	I1018 17:50:16.541794   69488 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 17:50:16.545653   69488 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 17:50:16.545679   69488 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 17:50:16.545690   69488 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/addons for local assets ...
	I1018 17:50:16.545754   69488 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/files for local assets ...
	I1018 17:50:16.545831   69488 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> 43202.pem in /etc/ssl/certs
	I1018 17:50:16.545837   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> /etc/ssl/certs/43202.pem
	I1018 17:50:16.545942   69488 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 17:50:16.558126   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /etc/ssl/certs/43202.pem (1708 bytes)
	I1018 17:50:16.579067   69488 start.go:296] duration metric: took 166.265226ms for postStartSetup
	I1018 17:50:16.579147   69488 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 17:50:16.579196   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m03
	I1018 17:50:16.607003   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m03/id_rsa Username:docker}
	I1018 17:50:16.710563   69488 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 17:50:16.715811   69488 fix.go:56] duration metric: took 6.133736189s for fixHost
	I1018 17:50:16.715839   69488 start.go:83] releasing machines lock for "ha-181800-m03", held for 6.133787135s
	I1018 17:50:16.715904   69488 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-181800-m03
	I1018 17:50:16.738713   69488 out.go:179] * Found network options:
	I1018 17:50:16.742042   69488 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1018 17:50:16.745211   69488 proxy.go:120] fail to check proxy env: Error ip not in block
	W1018 17:50:16.745257   69488 proxy.go:120] fail to check proxy env: Error ip not in block
	W1018 17:50:16.745281   69488 proxy.go:120] fail to check proxy env: Error ip not in block
	W1018 17:50:16.745291   69488 proxy.go:120] fail to check proxy env: Error ip not in block
	I1018 17:50:16.745360   69488 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 17:50:16.745415   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m03
	I1018 17:50:16.745719   69488 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 17:50:16.745787   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m03
	I1018 17:50:16.786710   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m03/id_rsa Username:docker}
	I1018 17:50:16.789091   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m03/id_rsa Username:docker}
	I1018 17:50:17.000059   69488 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 17:50:17.007334   69488 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 17:50:17.007407   69488 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 17:50:17.020749   69488 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 17:50:17.020771   69488 start.go:495] detecting cgroup driver to use...
	I1018 17:50:17.020801   69488 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 17:50:17.020860   69488 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 17:50:17.040018   69488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 17:50:17.058499   69488 docker.go:218] disabling cri-docker service (if available) ...
	I1018 17:50:17.058565   69488 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 17:50:17.088757   69488 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 17:50:17.114857   69488 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 17:50:17.279680   69488 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 17:50:17.689048   69488 docker.go:234] disabling docker service ...
	I1018 17:50:17.689168   69488 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 17:50:17.768854   69488 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 17:50:17.797881   69488 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 17:50:18.156314   69488 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 17:50:18.369568   69488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 17:50:18.394137   69488 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 17:50:18.428969   69488 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 17:50:18.429103   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:50:18.447576   69488 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 17:50:18.447692   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:50:18.482845   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:50:18.510376   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:50:18.531315   69488 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 17:50:18.548495   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:50:18.563525   69488 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:50:18.581424   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:50:18.594509   69488 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 17:50:18.609129   69488 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 17:50:18.621435   69488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 17:50:18.879315   69488 ssh_runner.go:195] Run: sudo systemctl restart crio
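Taken together, the sed and grep edits above pin the pause image, switch CRI-O to the cgroupfs cgroup manager, move conmon into the pod cgroup, and open unprivileged low ports. A sketch of how to read the resulting values back on the node (file path from the commands; the expected lines are reconstructed from the sed expressions, not captured output):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # roughly expected:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",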
	I1018 17:50:19.151219   69488 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 17:50:19.151291   69488 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 17:50:19.155163   69488 start.go:563] Will wait 60s for crictl version
	I1018 17:50:19.155231   69488 ssh_runner.go:195] Run: which crictl
	I1018 17:50:19.159144   69488 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 17:50:19.185150   69488 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 17:50:19.185237   69488 ssh_runner.go:195] Run: crio --version
	I1018 17:50:19.215107   69488 ssh_runner.go:195] Run: crio --version
	I1018 17:50:19.252641   69488 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 17:50:19.255663   69488 out.go:179]   - env NO_PROXY=192.168.49.2
	I1018 17:50:19.258473   69488 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1018 17:50:19.261365   69488 cli_runner.go:164] Run: docker network inspect ha-181800 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 17:50:19.278013   69488 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1018 17:50:19.282046   69488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
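The one-liner above rewrites /etc/hosts without sed: it filters out any existing host.minikube.internal line, appends the fresh mapping, and copies the temp file back into place. Spelled out step by step (same paths; $$ is the shell PID used for the temp file name):

    grep -v $'\thost.minikube.internal$' /etc/hosts > /tmp/h.$$
    echo $'192.168.49.1\thost.minikube.internal' >> /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts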
	I1018 17:50:19.291553   69488 mustload.go:65] Loading cluster: ha-181800
	I1018 17:50:19.291792   69488 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:50:19.292044   69488 cli_runner.go:164] Run: docker container inspect ha-181800 --format={{.State.Status}}
	I1018 17:50:19.308345   69488 host.go:66] Checking if "ha-181800" exists ...
	I1018 17:50:19.308613   69488 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800 for IP: 192.168.49.4
	I1018 17:50:19.308629   69488 certs.go:195] generating shared ca certs ...
	I1018 17:50:19.308644   69488 certs.go:227] acquiring lock for ca certs: {Name:mk544ed642fa2832e9f6dd22fa45f3270b7c1ad4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:50:19.308750   69488 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key
	I1018 17:50:19.308801   69488 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key
	I1018 17:50:19.308811   69488 certs.go:257] generating profile certs ...
	I1018 17:50:19.308888   69488 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/client.key
	I1018 17:50:19.308994   69488 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.key.35e78fdb
	I1018 17:50:19.309039   69488 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.key
	I1018 17:50:19.309051   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1018 17:50:19.309064   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1018 17:50:19.309079   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1018 17:50:19.309093   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1018 17:50:19.309106   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1018 17:50:19.309121   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1018 17:50:19.309132   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1018 17:50:19.309147   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1018 17:50:19.309202   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem (1338 bytes)
	W1018 17:50:19.309233   69488 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320_empty.pem, impossibly tiny 0 bytes
	I1018 17:50:19.309246   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 17:50:19.309272   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem (1078 bytes)
	I1018 17:50:19.309298   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem (1123 bytes)
	I1018 17:50:19.309353   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem (1675 bytes)
	I1018 17:50:19.309405   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem (1708 bytes)
	I1018 17:50:19.309436   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> /usr/share/ca-certificates/43202.pem
	I1018 17:50:19.309452   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:50:19.309465   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem -> /usr/share/ca-certificates/4320.pem
	I1018 17:50:19.309518   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:50:19.326970   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800/id_rsa Username:docker}
	I1018 17:50:19.425285   69488 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1018 17:50:19.430205   69488 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1018 17:50:19.438544   69488 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1018 17:50:19.442194   69488 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1018 17:50:19.450335   69488 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1018 17:50:19.454272   69488 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1018 17:50:19.462534   69488 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1018 17:50:19.466318   69488 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1018 17:50:19.475475   69488 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1018 17:50:19.479138   69488 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1018 17:50:19.487039   69488 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1018 17:50:19.492406   69488 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1018 17:50:19.511212   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 17:50:19.558261   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 17:50:19.590631   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 17:50:19.618816   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 17:50:19.644073   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1018 17:50:19.666879   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 17:50:19.688513   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 17:50:19.707989   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 17:50:19.736170   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /usr/share/ca-certificates/43202.pem (1708 bytes)
	I1018 17:50:19.759883   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 17:50:19.781940   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem --> /usr/share/ca-certificates/4320.pem (1338 bytes)
	I1018 17:50:19.806805   69488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1018 17:50:19.820301   69488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1018 17:50:19.837237   69488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1018 17:50:19.852161   69488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1018 17:50:19.865774   69488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1018 17:50:19.879759   69488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1018 17:50:19.893543   69488 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1018 17:50:19.907773   69488 ssh_runner.go:195] Run: openssl version
	I1018 17:50:19.914031   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4320.pem && ln -fs /usr/share/ca-certificates/4320.pem /etc/ssl/certs/4320.pem"
	I1018 17:50:19.923464   69488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4320.pem
	I1018 17:50:19.928100   69488 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 17:19 /usr/share/ca-certificates/4320.pem
	I1018 17:50:19.928198   69488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4320.pem
	I1018 17:50:19.970114   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4320.pem /etc/ssl/certs/51391683.0"
	I1018 17:50:19.978890   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43202.pem && ln -fs /usr/share/ca-certificates/43202.pem /etc/ssl/certs/43202.pem"
	I1018 17:50:19.987235   69488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43202.pem
	I1018 17:50:19.991041   69488 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 17:19 /usr/share/ca-certificates/43202.pem
	I1018 17:50:19.991160   69488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43202.pem
	I1018 17:50:20.033052   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43202.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 17:50:20.042399   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 17:50:20.051218   69488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:50:20.055291   69488 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:50:20.055383   69488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:50:20.097864   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 17:50:20.106870   69488 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 17:50:20.111573   69488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 17:50:20.153811   69488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 17:50:20.195276   69488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 17:50:20.242865   69488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 17:50:20.284917   69488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 17:50:20.327528   69488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
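Each `openssl x509 -checkend 86400` call above exits non-zero if the certificate expires within the next 24 hours, which is presumably how minikube decides whether the existing control-plane certs can be reused on restart. The same sweep, written as a loop over the files checked above:

    for c in apiserver-etcd-client apiserver-kubelet-client etcd/server \
             etcd/healthcheck-client etcd/peer front-proxy-client; do
      sudo openssl x509 -noout -in "/var/lib/minikube/certs/$c.crt" -checkend 86400 \
        && echo "$c: valid for at least another 24h" \
        || echo "$c: expires within 24h"
    done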
	I1018 17:50:20.380629   69488 kubeadm.go:934] updating node {m03 192.168.49.4 8443 v1.34.1 crio true true} ...
	I1018 17:50:20.380764   69488 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-181800-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-181800 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
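The [Service] drop-in above appears to be what is later copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp a few lines below). Once it is in place on the node, the effective unit can be reviewed with:

    systemctl cat kubelet             # shows kubelet.service plus the 10-kubeadm.conf drop-in
    systemctl status kubelet --no-pager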
	I1018 17:50:20.380810   69488 kube-vip.go:115] generating kube-vip config ...
	I1018 17:50:20.380884   69488 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1018 17:50:20.394557   69488 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1018 17:50:20.394614   69488 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
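The manifest above is written as a static pod (the scp to /etc/kubernetes/manifests/kube-vip.yaml a few lines below), so the node's kubelet starts it directly without the scheduler; it is the pod that holds the 192.168.49.254 control-plane VIP. Once the node is back up it should show as (context name assumed to match the profile in this log):

    kubectl --context ha-181800 -n kube-system get pod kube-vip-ha-181800-m03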
	I1018 17:50:20.394671   69488 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 17:50:20.404177   69488 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 17:50:20.404302   69488 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1018 17:50:20.412251   69488 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1018 17:50:20.425311   69488 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 17:50:20.441214   69488 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1018 17:50:20.463677   69488 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1018 17:50:20.468015   69488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 17:50:20.478500   69488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 17:50:20.642164   69488 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 17:50:20.673908   69488 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 17:50:20.674213   69488 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:50:20.679253   69488 out.go:179] * Verifying Kubernetes components...
	I1018 17:50:20.682245   69488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 17:50:20.839086   69488 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 17:50:20.854027   69488 kapi.go:59] client config for ha-181800: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/client.key", CAFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1018 17:50:20.854101   69488 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1018 17:50:20.854335   69488 node_ready.go:35] waiting up to 6m0s for node "ha-181800-m03" to be "Ready" ...
	W1018 17:50:22.857724   69488 node_ready.go:57] node "ha-181800-m03" has "Ready":"Unknown" status (will retry)
	W1018 17:50:24.858447   69488 node_ready.go:57] node "ha-181800-m03" has "Ready":"Unknown" status (will retry)
	W1018 17:50:26.858609   69488 node_ready.go:57] node "ha-181800-m03" has "Ready":"Unknown" status (will retry)
	W1018 17:50:29.359403   69488 node_ready.go:57] node "ha-181800-m03" has "Ready":"Unknown" status (will retry)
	W1018 17:50:31.859188   69488 node_ready.go:57] node "ha-181800-m03" has "Ready":"Unknown" status (will retry)
	W1018 17:50:34.358228   69488 node_ready.go:57] node "ha-181800-m03" has "Ready":"Unknown" status (will retry)
	I1018 17:50:34.857876   69488 node_ready.go:49] node "ha-181800-m03" is "Ready"
	I1018 17:50:34.857902   69488 node_ready.go:38] duration metric: took 14.003549338s for node "ha-181800-m03" to be "Ready" ...
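The 6-minute Ready poll performed above (node_ready.go) has a direct kubectl equivalent, shown here only as an illustration with the names from this run:

    kubectl --context ha-181800 wait --for=condition=Ready node/ha-181800-m03 --timeout=6m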
	I1018 17:50:34.857914   69488 api_server.go:52] waiting for apiserver process to appear ...
	I1018 17:50:34.857973   69488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:50:34.869120   69488 api_server.go:72] duration metric: took 14.194796326s to wait for apiserver process to appear ...
	I1018 17:50:34.869149   69488 api_server.go:88] waiting for apiserver healthz status ...
	I1018 17:50:34.869170   69488 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 17:50:34.878933   69488 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1018 17:50:34.879871   69488 api_server.go:141] control plane version: v1.34.1
	I1018 17:50:34.879896   69488 api_server.go:131] duration metric: took 10.739864ms to wait for apiserver health ...
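The healthz probe above can be reproduced by hand against the same endpoint; -k skips certificate verification since the apiserver's serving cert is not in the local trust store:

    curl -sk https://192.168.49.2:8443/healthz
    # expected: ok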
	I1018 17:50:34.879915   69488 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 17:50:34.886492   69488 system_pods.go:59] 26 kube-system pods found
	I1018 17:50:34.886536   69488 system_pods.go:61] "coredns-66bc5c9577-f6v2w" [a1fbdf00-9636-43a5-b1ed-a98bcacb5537] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 17:50:34.886578   69488 system_pods.go:61] "coredns-66bc5c9577-p7nbg" [9d361193-5b45-400e-8161-804fc30e7515] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 17:50:34.886593   69488 system_pods.go:61] "etcd-ha-181800" [3aafeb42-d09a-4b84-9739-e25adc3a4135] Running
	I1018 17:50:34.886598   69488 system_pods.go:61] "etcd-ha-181800-m02" [194d8d52-b9b6-43ae-8c1f-01b965d3ae96] Running
	I1018 17:50:34.886603   69488 system_pods.go:61] "etcd-ha-181800-m03" [f52cd0ee-6f99-49ba-8c4f-218b8d166fe2] Running
	I1018 17:50:34.886607   69488 system_pods.go:61] "kindnet-72mvm" [5edfc356-9d49-4895-b36a-06c2bd39155a] Running
	I1018 17:50:34.886622   69488 system_pods.go:61] "kindnet-86s8z" [6559ac9e-c73d-4d49-a0e1-87d630e5bec8] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1018 17:50:34.886629   69488 system_pods.go:61] "kindnet-88bv7" [3b3b9715-1e6e-4046-adae-f372381e068a] Running
	I1018 17:50:34.886642   69488 system_pods.go:61] "kindnet-9qbbw" [d1a305ed-4a0e-4ccc-90e0-04577ad4e5c4] Running
	I1018 17:50:34.886646   69488 system_pods.go:61] "kube-apiserver-ha-181800" [4966738e-d055-404d-82ad-0d3f23ef0337] Running
	I1018 17:50:34.886650   69488 system_pods.go:61] "kube-apiserver-ha-181800-m02" [344fc499-0c04-4f86-a919-3c2da1e7a1e6] Running
	I1018 17:50:34.886654   69488 system_pods.go:61] "kube-apiserver-ha-181800-m03" [ce72f944-adc2-46a9-a83c-dc75936c3e9c] Running
	I1018 17:50:34.886659   69488 system_pods.go:61] "kube-controller-manager-ha-181800" [9a4be61b-4ecc-46da-86a1-472b6da720b9] Running
	I1018 17:50:34.886672   69488 system_pods.go:61] "kube-controller-manager-ha-181800-m02" [6a519ce2-92dc-4003-8f1a-6d818fea6da3] Running
	I1018 17:50:34.886679   69488 system_pods.go:61] "kube-controller-manager-ha-181800-m03" [9d247c9d-37a0-4880-8b0a-1134ebb963ab] Running
	I1018 17:50:34.886685   69488 system_pods.go:61] "kube-proxy-dpwpn" [dfabd129-fc36-4d16-ab0f-0b9ecc015712] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1018 17:50:34.886699   69488 system_pods.go:61] "kube-proxy-fj4ww" [40c5681f-ad11-4e21-a852-5601e2a9fa6e] Running
	I1018 17:50:34.886703   69488 system_pods.go:61] "kube-proxy-qsqmb" [9e100b31-50e5-4d86-a234-0d6277009e98] Running
	I1018 17:50:34.886707   69488 system_pods.go:61] "kube-proxy-stgvm" [15b89226-91ae-478f-acfe-7841776b1377] Running
	I1018 17:50:34.886714   69488 system_pods.go:61] "kube-scheduler-ha-181800" [f4699386-754c-4fa2-8556-174d872d6825] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 17:50:34.886723   69488 system_pods.go:61] "kube-scheduler-ha-181800-m02" [565d55c5-9541-4ef9-a036-3d9ff03f0fa9] Running
	I1018 17:50:34.886727   69488 system_pods.go:61] "kube-scheduler-ha-181800-m03" [4f8687e4-3dbc-4c98-97a4-ab703b016798] Running
	I1018 17:50:34.886732   69488 system_pods.go:61] "kube-vip-ha-181800" [a947f5a9-6257-4ff0-9f73-2d720974668b] Running
	I1018 17:50:34.886739   69488 system_pods.go:61] "kube-vip-ha-181800-m02" [21258022-efed-42fb-b206-89ffcd8d3820] Running
	I1018 17:50:34.886743   69488 system_pods.go:61] "kube-vip-ha-181800-m03" [0087f776-5d07-4c43-906d-c63afc2cc349] Running
	I1018 17:50:34.886747   69488 system_pods.go:61] "storage-provisioner" [3c6521cd-8e1b-46aa-96a3-39e475e1426c] Running
	I1018 17:50:34.886753   69488 system_pods.go:74] duration metric: took 6.831276ms to wait for pod list to return data ...
	I1018 17:50:34.886767   69488 default_sa.go:34] waiting for default service account to be created ...
	I1018 17:50:34.890059   69488 default_sa.go:45] found service account: "default"
	I1018 17:50:34.890090   69488 default_sa.go:55] duration metric: took 3.316408ms for default service account to be created ...
	I1018 17:50:34.890099   69488 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 17:50:34.899064   69488 system_pods.go:86] 26 kube-system pods found
	I1018 17:50:34.899114   69488 system_pods.go:89] "coredns-66bc5c9577-f6v2w" [a1fbdf00-9636-43a5-b1ed-a98bcacb5537] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 17:50:34.899126   69488 system_pods.go:89] "coredns-66bc5c9577-p7nbg" [9d361193-5b45-400e-8161-804fc30e7515] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 17:50:34.899135   69488 system_pods.go:89] "etcd-ha-181800" [3aafeb42-d09a-4b84-9739-e25adc3a4135] Running
	I1018 17:50:34.899145   69488 system_pods.go:89] "etcd-ha-181800-m02" [194d8d52-b9b6-43ae-8c1f-01b965d3ae96] Running
	I1018 17:50:34.899154   69488 system_pods.go:89] "etcd-ha-181800-m03" [f52cd0ee-6f99-49ba-8c4f-218b8d166fe2] Running
	I1018 17:50:34.899159   69488 system_pods.go:89] "kindnet-72mvm" [5edfc356-9d49-4895-b36a-06c2bd39155a] Running
	I1018 17:50:34.899172   69488 system_pods.go:89] "kindnet-86s8z" [6559ac9e-c73d-4d49-a0e1-87d630e5bec8] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1018 17:50:34.899182   69488 system_pods.go:89] "kindnet-88bv7" [3b3b9715-1e6e-4046-adae-f372381e068a] Running
	I1018 17:50:34.899196   69488 system_pods.go:89] "kindnet-9qbbw" [d1a305ed-4a0e-4ccc-90e0-04577ad4e5c4] Running
	I1018 17:50:34.899202   69488 system_pods.go:89] "kube-apiserver-ha-181800" [4966738e-d055-404d-82ad-0d3f23ef0337] Running
	I1018 17:50:34.899213   69488 system_pods.go:89] "kube-apiserver-ha-181800-m02" [344fc499-0c04-4f86-a919-3c2da1e7a1e6] Running
	I1018 17:50:34.899223   69488 system_pods.go:89] "kube-apiserver-ha-181800-m03" [ce72f944-adc2-46a9-a83c-dc75936c3e9c] Running
	I1018 17:50:34.899228   69488 system_pods.go:89] "kube-controller-manager-ha-181800" [9a4be61b-4ecc-46da-86a1-472b6da720b9] Running
	I1018 17:50:34.899243   69488 system_pods.go:89] "kube-controller-manager-ha-181800-m02" [6a519ce2-92dc-4003-8f1a-6d818fea6da3] Running
	I1018 17:50:34.899249   69488 system_pods.go:89] "kube-controller-manager-ha-181800-m03" [9d247c9d-37a0-4880-8b0a-1134ebb963ab] Running
	I1018 17:50:34.899260   69488 system_pods.go:89] "kube-proxy-dpwpn" [dfabd129-fc36-4d16-ab0f-0b9ecc015712] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1018 17:50:34.899271   69488 system_pods.go:89] "kube-proxy-fj4ww" [40c5681f-ad11-4e21-a852-5601e2a9fa6e] Running
	I1018 17:50:34.899276   69488 system_pods.go:89] "kube-proxy-qsqmb" [9e100b31-50e5-4d86-a234-0d6277009e98] Running
	I1018 17:50:34.899281   69488 system_pods.go:89] "kube-proxy-stgvm" [15b89226-91ae-478f-acfe-7841776b1377] Running
	I1018 17:50:34.899294   69488 system_pods.go:89] "kube-scheduler-ha-181800" [f4699386-754c-4fa2-8556-174d872d6825] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 17:50:34.899303   69488 system_pods.go:89] "kube-scheduler-ha-181800-m02" [565d55c5-9541-4ef9-a036-3d9ff03f0fa9] Running
	I1018 17:50:34.899308   69488 system_pods.go:89] "kube-scheduler-ha-181800-m03" [4f8687e4-3dbc-4c98-97a4-ab703b016798] Running
	I1018 17:50:34.899312   69488 system_pods.go:89] "kube-vip-ha-181800" [a947f5a9-6257-4ff0-9f73-2d720974668b] Running
	I1018 17:50:34.899323   69488 system_pods.go:89] "kube-vip-ha-181800-m02" [21258022-efed-42fb-b206-89ffcd8d3820] Running
	I1018 17:50:34.899327   69488 system_pods.go:89] "kube-vip-ha-181800-m03" [0087f776-5d07-4c43-906d-c63afc2cc349] Running
	I1018 17:50:34.899331   69488 system_pods.go:89] "storage-provisioner" [3c6521cd-8e1b-46aa-96a3-39e475e1426c] Running
	I1018 17:50:34.899338   69488 system_pods.go:126] duration metric: took 9.233497ms to wait for k8s-apps to be running ...
	I1018 17:50:34.899350   69488 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 17:50:34.899417   69488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 17:50:34.917250   69488 system_svc.go:56] duration metric: took 17.889347ms WaitForService to wait for kubelet
	I1018 17:50:34.917280   69488 kubeadm.go:586] duration metric: took 14.242961018s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 17:50:34.917312   69488 node_conditions.go:102] verifying NodePressure condition ...
	I1018 17:50:34.921584   69488 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 17:50:34.921618   69488 node_conditions.go:123] node cpu capacity is 2
	I1018 17:50:34.921629   69488 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 17:50:34.921635   69488 node_conditions.go:123] node cpu capacity is 2
	I1018 17:50:34.921640   69488 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 17:50:34.921644   69488 node_conditions.go:123] node cpu capacity is 2
	I1018 17:50:34.921648   69488 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 17:50:34.921652   69488 node_conditions.go:123] node cpu capacity is 2
	I1018 17:50:34.921657   69488 node_conditions.go:105] duration metric: took 4.33997ms to run NodePressure ...
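The per-node figures above (2 CPUs and 203034800Ki of ephemeral storage on each of the four nodes) come from the nodes' reported capacity; the same data can be pulled directly, for example:

    kubectl --context ha-181800 get nodes \
      -o custom-columns=NAME:.metadata.name,CPU:.status.capacity.cpu,EPHEMERAL:.status.capacity.ephemeral-storage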
	I1018 17:50:34.921672   69488 start.go:241] waiting for startup goroutines ...
	I1018 17:50:34.921695   69488 start.go:255] writing updated cluster config ...
	I1018 17:50:34.925146   69488 out.go:203] 
	I1018 17:50:34.928178   69488 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:50:34.928377   69488 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/config.json ...
	I1018 17:50:34.931719   69488 out.go:179] * Starting "ha-181800-m04" worker node in "ha-181800" cluster
	I1018 17:50:34.934625   69488 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 17:50:34.937723   69488 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 17:50:34.940621   69488 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 17:50:34.940656   69488 cache.go:58] Caching tarball of preloaded images
	I1018 17:50:34.940709   69488 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 17:50:34.940775   69488 preload.go:233] Found /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 17:50:34.940787   69488 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 17:50:34.940923   69488 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/config.json ...
	I1018 17:50:34.962521   69488 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 17:50:34.962544   69488 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 17:50:34.962563   69488 cache.go:232] Successfully downloaded all kic artifacts
	I1018 17:50:34.962587   69488 start.go:360] acquireMachinesLock for ha-181800-m04: {Name:mkde4f18de8924439f6b0cc4435fbaf784c3faa2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 17:50:34.962654   69488 start.go:364] duration metric: took 47.016µs to acquireMachinesLock for "ha-181800-m04"
	I1018 17:50:34.962676   69488 start.go:96] Skipping create...Using existing machine configuration
	I1018 17:50:34.962691   69488 fix.go:54] fixHost starting: m04
	I1018 17:50:34.962948   69488 cli_runner.go:164] Run: docker container inspect ha-181800-m04 --format={{.State.Status}}
	I1018 17:50:34.980810   69488 fix.go:112] recreateIfNeeded on ha-181800-m04: state=Stopped err=<nil>
	W1018 17:50:34.980838   69488 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 17:50:34.984164   69488 out.go:252] * Restarting existing docker container for "ha-181800-m04" ...
	I1018 17:50:34.984251   69488 cli_runner.go:164] Run: docker start ha-181800-m04
	I1018 17:50:35.315737   69488 cli_runner.go:164] Run: docker container inspect ha-181800-m04 --format={{.State.Status}}
	I1018 17:50:35.337160   69488 kic.go:430] container "ha-181800-m04" state is running.
	I1018 17:50:35.337590   69488 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-181800-m04
	I1018 17:50:35.363433   69488 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/config.json ...
	I1018 17:50:35.363682   69488 machine.go:93] provisionDockerMachine start ...
	I1018 17:50:35.363737   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m04
	I1018 17:50:35.394986   69488 main.go:141] libmachine: Using SSH client type: native
	I1018 17:50:35.395304   69488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1018 17:50:35.395315   69488 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 17:50:35.396115   69488 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1018 17:50:38.582281   69488 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-181800-m04
	
	I1018 17:50:38.582366   69488 ubuntu.go:182] provisioning hostname "ha-181800-m04"
	I1018 17:50:38.582470   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m04
	I1018 17:50:38.612842   69488 main.go:141] libmachine: Using SSH client type: native
	I1018 17:50:38.613162   69488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1018 17:50:38.613175   69488 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-181800-m04 && echo "ha-181800-m04" | sudo tee /etc/hostname
	I1018 17:50:38.824220   69488 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-181800-m04
	
	I1018 17:50:38.824341   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m04
	I1018 17:50:38.867678   69488 main.go:141] libmachine: Using SSH client type: native
	I1018 17:50:38.867969   69488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1018 17:50:38.867985   69488 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-181800-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-181800-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-181800-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 17:50:39.054604   69488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 17:50:39.054689   69488 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-2509/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-2509/.minikube}
	I1018 17:50:39.054718   69488 ubuntu.go:190] setting up certificates
	I1018 17:50:39.054753   69488 provision.go:84] configureAuth start
	I1018 17:50:39.054834   69488 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-181800-m04
	I1018 17:50:39.086058   69488 provision.go:143] copyHostCerts
	I1018 17:50:39.086092   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem
	I1018 17:50:39.086123   69488 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem, removing ...
	I1018 17:50:39.086130   69488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem
	I1018 17:50:39.086205   69488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem (1078 bytes)
	I1018 17:50:39.086277   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem
	I1018 17:50:39.086294   69488 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem, removing ...
	I1018 17:50:39.086298   69488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem
	I1018 17:50:39.086323   69488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem (1123 bytes)
	I1018 17:50:39.086360   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem
	I1018 17:50:39.086376   69488 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem, removing ...
	I1018 17:50:39.086380   69488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem
	I1018 17:50:39.086403   69488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem (1675 bytes)
	I1018 17:50:39.086448   69488 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem org=jenkins.ha-181800-m04 san=[127.0.0.1 192.168.49.5 ha-181800-m04 localhost minikube]
	I1018 17:50:39.468879   69488 provision.go:177] copyRemoteCerts
	I1018 17:50:39.469042   69488 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 17:50:39.469105   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m04
	I1018 17:50:39.488386   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m04/id_rsa Username:docker}
	I1018 17:50:39.624142   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1018 17:50:39.624201   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 17:50:39.661469   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1018 17:50:39.661533   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1018 17:50:39.687551   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1018 17:50:39.687610   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 17:50:39.714808   69488 provision.go:87] duration metric: took 660.019137ms to configureAuth
	I1018 17:50:39.714833   69488 ubuntu.go:206] setting minikube options for container-runtime
	I1018 17:50:39.715059   69488 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:50:39.715179   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m04
	I1018 17:50:39.744352   69488 main.go:141] libmachine: Using SSH client type: native
	I1018 17:50:39.744665   69488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1018 17:50:39.744680   69488 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 17:50:40.169343   69488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 17:50:40.169451   69488 machine.go:96] duration metric: took 4.805759657s to provisionDockerMachine
	I1018 17:50:40.169476   69488 start.go:293] postStartSetup for "ha-181800-m04" (driver="docker")
	I1018 17:50:40.169509   69488 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 17:50:40.169593   69488 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 17:50:40.169660   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m04
	I1018 17:50:40.199327   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m04/id_rsa Username:docker}
	I1018 17:50:40.309268   69488 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 17:50:40.313860   69488 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 17:50:40.313893   69488 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 17:50:40.313903   69488 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/addons for local assets ...
	I1018 17:50:40.313963   69488 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/files for local assets ...
	I1018 17:50:40.314046   69488 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> 43202.pem in /etc/ssl/certs
	I1018 17:50:40.314057   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> /etc/ssl/certs/43202.pem
	I1018 17:50:40.314164   69488 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 17:50:40.322086   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /etc/ssl/certs/43202.pem (1708 bytes)
	I1018 17:50:40.345649   69488 start.go:296] duration metric: took 176.137258ms for postStartSetup
	I1018 17:50:40.345726   69488 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 17:50:40.345765   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m04
	I1018 17:50:40.367346   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m04/id_rsa Username:docker}
	I1018 17:50:40.476066   69488 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 17:50:40.481571   69488 fix.go:56] duration metric: took 5.518874256s for fixHost
	I1018 17:50:40.481594   69488 start.go:83] releasing machines lock for "ha-181800-m04", held for 5.518929354s
	I1018 17:50:40.481667   69488 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-181800-m04
	I1018 17:50:40.518678   69488 out.go:179] * Found network options:
	I1018 17:50:40.522829   69488 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	W1018 17:50:40.526545   69488 proxy.go:120] fail to check proxy env: Error ip not in block
	W1018 17:50:40.526576   69488 proxy.go:120] fail to check proxy env: Error ip not in block
	W1018 17:50:40.526587   69488 proxy.go:120] fail to check proxy env: Error ip not in block
	W1018 17:50:40.526609   69488 proxy.go:120] fail to check proxy env: Error ip not in block
	W1018 17:50:40.526619   69488 proxy.go:120] fail to check proxy env: Error ip not in block
	W1018 17:50:40.526628   69488 proxy.go:120] fail to check proxy env: Error ip not in block
	I1018 17:50:40.526702   69488 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 17:50:40.526739   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m04
	I1018 17:50:40.526991   69488 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 17:50:40.527047   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m04
	I1018 17:50:40.564877   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m04/id_rsa Username:docker}
	I1018 17:50:40.572778   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m04/id_rsa Username:docker}
	I1018 17:50:40.812088   69488 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 17:50:40.818560   69488 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 17:50:40.818643   69488 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 17:50:40.827770   69488 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 17:50:40.827794   69488 start.go:495] detecting cgroup driver to use...
	I1018 17:50:40.827830   69488 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 17:50:40.827881   69488 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 17:50:40.844762   69488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 17:50:40.859855   69488 docker.go:218] disabling cri-docker service (if available) ...
	I1018 17:50:40.859920   69488 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 17:50:40.877123   69488 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 17:50:40.901442   69488 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 17:50:41.039508   69488 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 17:50:41.185848   69488 docker.go:234] disabling docker service ...
	I1018 17:50:41.185936   69488 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 17:50:41.204077   69488 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 17:50:41.219382   69488 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 17:50:41.421847   69488 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 17:50:41.682651   69488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 17:50:41.704546   69488 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 17:50:41.722306   69488 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 17:50:41.722376   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:50:41.737444   69488 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 17:50:41.737564   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:50:41.753240   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:50:41.765254   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:50:41.778891   69488 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 17:50:41.788840   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:50:41.799676   69488 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:50:41.810022   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:50:41.820591   69488 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 17:50:41.828788   69488 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 17:50:41.838483   69488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 17:50:41.972124   69488 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 17:50:42.178891   69488 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 17:50:42.178980   69488 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 17:50:42.184242   69488 start.go:563] Will wait 60s for crictl version
	I1018 17:50:42.184331   69488 ssh_runner.go:195] Run: which crictl
	I1018 17:50:42.191980   69488 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 17:50:42.224462   69488 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 17:50:42.224630   69488 ssh_runner.go:195] Run: crio --version
	I1018 17:50:42.261636   69488 ssh_runner.go:195] Run: crio --version
	I1018 17:50:42.307376   69488 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 17:50:42.310676   69488 out.go:179]   - env NO_PROXY=192.168.49.2
	I1018 17:50:42.313598   69488 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1018 17:50:42.316600   69488 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	I1018 17:50:42.319690   69488 cli_runner.go:164] Run: docker network inspect ha-181800 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 17:50:42.337639   69488 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1018 17:50:42.341794   69488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 17:50:42.354387   69488 mustload.go:65] Loading cluster: ha-181800
	I1018 17:50:42.354632   69488 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:50:42.354880   69488 cli_runner.go:164] Run: docker container inspect ha-181800 --format={{.State.Status}}
	I1018 17:50:42.375574   69488 host.go:66] Checking if "ha-181800" exists ...
	I1018 17:50:42.375851   69488 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800 for IP: 192.168.49.5
	I1018 17:50:42.375865   69488 certs.go:195] generating shared ca certs ...
	I1018 17:50:42.375878   69488 certs.go:227] acquiring lock for ca certs: {Name:mk544ed642fa2832e9f6dd22fa45f3270b7c1ad4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:50:42.375994   69488 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key
	I1018 17:50:42.376039   69488 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key
	I1018 17:50:42.376053   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1018 17:50:42.376065   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1018 17:50:42.376082   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1018 17:50:42.376099   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1018 17:50:42.376158   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem (1338 bytes)
	W1018 17:50:42.376191   69488 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320_empty.pem, impossibly tiny 0 bytes
	I1018 17:50:42.376202   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 17:50:42.376227   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem (1078 bytes)
	I1018 17:50:42.376253   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem (1123 bytes)
	I1018 17:50:42.376280   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem (1675 bytes)
	I1018 17:50:42.376328   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem (1708 bytes)
	I1018 17:50:42.376359   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:50:42.376376   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem -> /usr/share/ca-certificates/4320.pem
	I1018 17:50:42.376390   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> /usr/share/ca-certificates/43202.pem
	I1018 17:50:42.376442   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 17:50:42.395447   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 17:50:42.416556   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 17:50:42.438126   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 17:50:42.461131   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 17:50:42.491460   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem --> /usr/share/ca-certificates/4320.pem (1338 bytes)
	I1018 17:50:42.516977   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /usr/share/ca-certificates/43202.pem (1708 bytes)
	I1018 17:50:42.546320   69488 ssh_runner.go:195] Run: openssl version
	I1018 17:50:42.554579   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 17:50:42.566626   69488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:50:42.570900   69488 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:50:42.570969   69488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:50:42.623862   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 17:50:42.634866   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4320.pem && ln -fs /usr/share/ca-certificates/4320.pem /etc/ssl/certs/4320.pem"
	I1018 17:50:42.645108   69488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4320.pem
	I1018 17:50:42.655323   69488 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 17:19 /usr/share/ca-certificates/4320.pem
	I1018 17:50:42.655394   69488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4320.pem
	I1018 17:50:42.704646   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4320.pem /etc/ssl/certs/51391683.0"
	I1018 17:50:42.713644   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43202.pem && ln -fs /usr/share/ca-certificates/43202.pem /etc/ssl/certs/43202.pem"
	I1018 17:50:42.722573   69488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43202.pem
	I1018 17:50:42.726769   69488 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 17:19 /usr/share/ca-certificates/43202.pem
	I1018 17:50:42.726843   69488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43202.pem
	I1018 17:50:42.784245   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43202.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 17:50:42.792405   69488 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 17:50:42.803513   69488 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 17:50:42.803579   69488 kubeadm.go:934] updating node {m04 192.168.49.5 0 v1.34.1 crio false true} ...
	I1018 17:50:42.803680   69488 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-181800-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-181800 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 17:50:42.803759   69488 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 17:50:42.812894   69488 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 17:50:42.813002   69488 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1018 17:50:42.821266   69488 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1018 17:50:42.839760   69488 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 17:50:42.859184   69488 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1018 17:50:42.864035   69488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 17:50:42.875123   69488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 17:50:43.006572   69488 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 17:50:43.022917   69488 start.go:235] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1018 17:50:43.023313   69488 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:50:43.026393   69488 out.go:179] * Verifying Kubernetes components...
	I1018 17:50:43.029360   69488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 17:50:43.176018   69488 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 17:50:43.195799   69488 kapi.go:59] client config for ha-181800: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/client.key", CAFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1018 17:50:43.195926   69488 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1018 17:50:43.196200   69488 node_ready.go:35] waiting up to 6m0s for node "ha-181800-m04" to be "Ready" ...
	W1018 17:50:45.201538   69488 node_ready.go:57] node "ha-181800-m04" has "Ready":"Unknown" status (will retry)
	W1018 17:50:47.702556   69488 node_ready.go:57] node "ha-181800-m04" has "Ready":"Unknown" status (will retry)
	W1018 17:50:50.201440   69488 node_ready.go:57] node "ha-181800-m04" has "Ready":"Unknown" status (will retry)
	I1018 17:50:50.700371   69488 node_ready.go:49] node "ha-181800-m04" is "Ready"
	I1018 17:50:50.700396   69488 node_ready.go:38] duration metric: took 7.50415906s for node "ha-181800-m04" to be "Ready" ...
	I1018 17:50:50.700408   69488 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 17:50:50.700467   69488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 17:50:50.718400   69488 system_svc.go:56] duration metric: took 17.984135ms WaitForService to wait for kubelet
	I1018 17:50:50.718432   69488 kubeadm.go:586] duration metric: took 7.695467215s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 17:50:50.718449   69488 node_conditions.go:102] verifying NodePressure condition ...
	I1018 17:50:50.722731   69488 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 17:50:50.722761   69488 node_conditions.go:123] node cpu capacity is 2
	I1018 17:50:50.722774   69488 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 17:50:50.722779   69488 node_conditions.go:123] node cpu capacity is 2
	I1018 17:50:50.722783   69488 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 17:50:50.722787   69488 node_conditions.go:123] node cpu capacity is 2
	I1018 17:50:50.722791   69488 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 17:50:50.722795   69488 node_conditions.go:123] node cpu capacity is 2
	I1018 17:50:50.722799   69488 node_conditions.go:105] duration metric: took 4.345599ms to run NodePressure ...
	I1018 17:50:50.722811   69488 start.go:241] waiting for startup goroutines ...
	I1018 17:50:50.722837   69488 start.go:255] writing updated cluster config ...
	I1018 17:50:50.723159   69488 ssh_runner.go:195] Run: rm -f paused
	I1018 17:50:50.727229   69488 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 17:50:50.727747   69488 kapi.go:59] client config for ha-181800: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/client.key", CAFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1018 17:50:50.750070   69488 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-f6v2w" in "kube-system" namespace to be "Ready" or be gone ...
	W1018 17:50:52.756554   69488 pod_ready.go:104] pod "coredns-66bc5c9577-f6v2w" is not "Ready", error: <nil>
	W1018 17:50:54.757224   69488 pod_ready.go:104] pod "coredns-66bc5c9577-f6v2w" is not "Ready", error: <nil>
	I1018 17:50:55.872324   69488 pod_ready.go:94] pod "coredns-66bc5c9577-f6v2w" is "Ready"
	I1018 17:50:55.872348   69488 pod_ready.go:86] duration metric: took 5.122247372s for pod "coredns-66bc5c9577-f6v2w" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:55.872359   69488 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-p7nbg" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:55.891895   69488 pod_ready.go:94] pod "coredns-66bc5c9577-p7nbg" is "Ready"
	I1018 17:50:55.891959   69488 pod_ready.go:86] duration metric: took 19.593189ms for pod "coredns-66bc5c9577-p7nbg" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:55.900138   69488 pod_ready.go:83] waiting for pod "etcd-ha-181800" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:55.913638   69488 pod_ready.go:94] pod "etcd-ha-181800" is "Ready"
	I1018 17:50:55.913660   69488 pod_ready.go:86] duration metric: took 13.499842ms for pod "etcd-ha-181800" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:55.913670   69488 pod_ready.go:83] waiting for pod "etcd-ha-181800-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:55.920519   69488 pod_ready.go:94] pod "etcd-ha-181800-m02" is "Ready"
	I1018 17:50:55.920596   69488 pod_ready.go:86] duration metric: took 6.91899ms for pod "etcd-ha-181800-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:55.920619   69488 pod_ready.go:83] waiting for pod "etcd-ha-181800-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:55.954930   69488 pod_ready.go:94] pod "etcd-ha-181800-m03" is "Ready"
	I1018 17:50:55.955010   69488 pod_ready.go:86] duration metric: took 34.368453ms for pod "etcd-ha-181800-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:56.150428   69488 request.go:683] "Waited before sending request" delay="195.256268ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I1018 17:50:56.154502   69488 pod_ready.go:83] waiting for pod "kube-apiserver-ha-181800" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:56.350745   69488 request.go:683] "Waited before sending request" delay="196.132391ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-181800"
	I1018 17:50:56.551187   69488 request.go:683] "Waited before sending request" delay="197.298856ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-181800"
	I1018 17:50:56.554146   69488 pod_ready.go:94] pod "kube-apiserver-ha-181800" is "Ready"
	I1018 17:50:56.554177   69488 pod_ready.go:86] duration metric: took 399.650322ms for pod "kube-apiserver-ha-181800" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:56.554188   69488 pod_ready.go:83] waiting for pod "kube-apiserver-ha-181800-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:56.750528   69488 request.go:683] "Waited before sending request" delay="196.269246ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-181800-m02"
	I1018 17:50:56.951191   69488 request.go:683] "Waited before sending request" delay="191.312029ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-181800-m02"
	I1018 17:50:56.954528   69488 pod_ready.go:94] pod "kube-apiserver-ha-181800-m02" is "Ready"
	I1018 17:50:56.954555   69488 pod_ready.go:86] duration metric: took 400.360633ms for pod "kube-apiserver-ha-181800-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:56.954567   69488 pod_ready.go:83] waiting for pod "kube-apiserver-ha-181800-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:57.150777   69488 request.go:683] "Waited before sending request" delay="196.132408ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-181800-m03"
	I1018 17:50:57.350632   69488 request.go:683] "Waited before sending request" delay="196.3256ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-181800-m03"
	I1018 17:50:57.354249   69488 pod_ready.go:94] pod "kube-apiserver-ha-181800-m03" is "Ready"
	I1018 17:50:57.354277   69488 pod_ready.go:86] duration metric: took 399.70318ms for pod "kube-apiserver-ha-181800-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:57.550692   69488 request.go:683] "Waited before sending request" delay="196.326346ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I1018 17:50:57.554682   69488 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-181800" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:57.750932   69488 request.go:683] "Waited before sending request" delay="196.156235ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-181800"
	I1018 17:50:57.951083   69488 request.go:683] "Waited before sending request" delay="179.305539ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-181800"
	I1018 17:50:57.954373   69488 pod_ready.go:94] pod "kube-controller-manager-ha-181800" is "Ready"
	I1018 17:50:57.954402   69488 pod_ready.go:86] duration metric: took 399.688608ms for pod "kube-controller-manager-ha-181800" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:57.954412   69488 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-181800-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:58.150687   69488 request.go:683] "Waited before sending request" delay="196.203982ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-181800-m02"
	I1018 17:50:58.351259   69488 request.go:683] "Waited before sending request" delay="197.229423ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-181800-m02"
	I1018 17:50:58.354427   69488 pod_ready.go:94] pod "kube-controller-manager-ha-181800-m02" is "Ready"
	I1018 17:50:58.354451   69488 pod_ready.go:86] duration metric: took 400.032752ms for pod "kube-controller-manager-ha-181800-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:58.354461   69488 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-181800-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:58.550867   69488 request.go:683] "Waited before sending request" delay="196.323713ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-181800-m03"
	I1018 17:50:58.751164   69488 request.go:683] "Waited before sending request" delay="196.337531ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-181800-m03"
	I1018 17:50:58.754290   69488 pod_ready.go:94] pod "kube-controller-manager-ha-181800-m03" is "Ready"
	I1018 17:50:58.754318   69488 pod_ready.go:86] duration metric: took 399.850398ms for pod "kube-controller-manager-ha-181800-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:58.950697   69488 request.go:683] "Waited before sending request" delay="196.290137ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I1018 17:50:58.954553   69488 pod_ready.go:83] waiting for pod "kube-proxy-dpwpn" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:59.150998   69488 request.go:683] "Waited before sending request" delay="196.346368ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dpwpn"
	I1018 17:50:59.350617   69488 request.go:683] "Waited before sending request" delay="195.289755ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-181800-m02"
	I1018 17:50:59.353848   69488 pod_ready.go:94] pod "kube-proxy-dpwpn" is "Ready"
	I1018 17:50:59.353878   69488 pod_ready.go:86] duration metric: took 399.293025ms for pod "kube-proxy-dpwpn" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:59.353888   69488 pod_ready.go:83] waiting for pod "kube-proxy-fj4ww" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:59.550367   69488 request.go:683] "Waited before sending request" delay="196.374503ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fj4ww"
	I1018 17:50:59.751156   69488 request.go:683] "Waited before sending request" delay="197.148429ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-181800-m04"
	I1018 17:50:59.754407   69488 pod_ready.go:94] pod "kube-proxy-fj4ww" is "Ready"
	I1018 17:50:59.754437   69488 pod_ready.go:86] duration metric: took 400.541386ms for pod "kube-proxy-fj4ww" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:59.754446   69488 pod_ready.go:83] waiting for pod "kube-proxy-qsqmb" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:59.950755   69488 request.go:683] "Waited before sending request" delay="196.237656ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qsqmb"
	I1018 17:51:00.158458   69488 request.go:683] "Waited before sending request" delay="204.154018ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-181800-m03"
	I1018 17:51:00.170490   69488 pod_ready.go:94] pod "kube-proxy-qsqmb" is "Ready"
	I1018 17:51:00.170526   69488 pod_ready.go:86] duration metric: took 416.072575ms for pod "kube-proxy-qsqmb" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:51:00.170537   69488 pod_ready.go:83] waiting for pod "kube-proxy-stgvm" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:51:00.350837   69488 request.go:683] "Waited before sending request" delay="180.202158ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-stgvm"
	I1018 17:51:00.550600   69488 request.go:683] "Waited before sending request" delay="195.396062ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-181800"
	I1018 17:51:00.553989   69488 pod_ready.go:94] pod "kube-proxy-stgvm" is "Ready"
	I1018 17:51:00.554026   69488 pod_ready.go:86] duration metric: took 383.481925ms for pod "kube-proxy-stgvm" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:51:00.750322   69488 request.go:683] "Waited before sending request" delay="196.164105ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-scheduler"
	I1018 17:51:00.754581   69488 pod_ready.go:83] waiting for pod "kube-scheduler-ha-181800" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:51:00.951090   69488 request.go:683] "Waited before sending request" delay="196.343135ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-181800"
	I1018 17:51:01.151207   69488 request.go:683] "Waited before sending request" delay="196.368472ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-181800"
	I1018 17:51:01.154780   69488 pod_ready.go:94] pod "kube-scheduler-ha-181800" is "Ready"
	I1018 17:51:01.154809   69488 pod_ready.go:86] duration metric: took 400.156865ms for pod "kube-scheduler-ha-181800" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:51:01.154820   69488 pod_ready.go:83] waiting for pod "kube-scheduler-ha-181800-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:51:01.351014   69488 request.go:683] "Waited before sending request" delay="196.125229ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-181800-m02"
	I1018 17:51:01.550334   69488 request.go:683] "Waited before sending request" delay="195.254374ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-181800-m02"
	I1018 17:51:01.553462   69488 pod_ready.go:94] pod "kube-scheduler-ha-181800-m02" is "Ready"
	I1018 17:51:01.553533   69488 pod_ready.go:86] duration metric: took 398.706213ms for pod "kube-scheduler-ha-181800-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:51:01.553558   69488 pod_ready.go:83] waiting for pod "kube-scheduler-ha-181800-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:51:01.750793   69488 request.go:683] "Waited before sending request" delay="197.139116ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-181800-m03"
	I1018 17:51:01.951100   69488 request.go:683] "Waited before sending request" delay="196.302232ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-181800-m03"
	I1018 17:51:01.954435   69488 pod_ready.go:94] pod "kube-scheduler-ha-181800-m03" is "Ready"
	I1018 17:51:01.954463   69488 pod_ready.go:86] duration metric: took 400.885736ms for pod "kube-scheduler-ha-181800-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:51:01.954476   69488 pod_ready.go:40] duration metric: took 11.227212191s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 17:51:02.019798   69488 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 17:51:02.023234   69488 out.go:179] * Done! kubectl is now configured to use "ha-181800" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 18 17:50:45 ha-181800 crio[665]: time="2025-10-18T17:50:45.572124206Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=3818bf02-e1ec-45e5-8db2-98e9f6e8000a name=/runtime.v1.ImageService/ImageStatus
	Oct 18 17:50:45 ha-181800 crio[665]: time="2025-10-18T17:50:45.573451845Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=bdb883a0-d1f7-44fb-bec3-c90a1d2ecb55 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 17:50:45 ha-181800 crio[665]: time="2025-10-18T17:50:45.573727681Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 17:50:45 ha-181800 crio[665]: time="2025-10-18T17:50:45.584989537Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 17:50:45 ha-181800 crio[665]: time="2025-10-18T17:50:45.585193183Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/87a35d3c6fccfe095ac3771dcbde81fc5df65bc9200469d9386fd64ba3708913/merged/etc/passwd: no such file or directory"
	Oct 18 17:50:45 ha-181800 crio[665]: time="2025-10-18T17:50:45.585221163Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/87a35d3c6fccfe095ac3771dcbde81fc5df65bc9200469d9386fd64ba3708913/merged/etc/group: no such file or directory"
	Oct 18 17:50:45 ha-181800 crio[665]: time="2025-10-18T17:50:45.585494192Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 17:50:45 ha-181800 crio[665]: time="2025-10-18T17:50:45.609702849Z" level=info msg="Created container 3955a976d16cdd5db102930c28bfc2c48f3fd22d0d8f4186e30edecd860f23fd: kube-system/storage-provisioner/storage-provisioner" id=bdb883a0-d1f7-44fb-bec3-c90a1d2ecb55 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 17:50:45 ha-181800 crio[665]: time="2025-10-18T17:50:45.610857892Z" level=info msg="Starting container: 3955a976d16cdd5db102930c28bfc2c48f3fd22d0d8f4186e30edecd860f23fd" id=4f969c9f-8845-4412-b24f-e780eb6068e8 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 17:50:45 ha-181800 crio[665]: time="2025-10-18T17:50:45.615041848Z" level=info msg="Started container" PID=1488 containerID=3955a976d16cdd5db102930c28bfc2c48f3fd22d0d8f4186e30edecd860f23fd description=kube-system/storage-provisioner/storage-provisioner id=4f969c9f-8845-4412-b24f-e780eb6068e8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9d76fad66ab674fdb6d96a586ff07b63771e9f80ffb0da6d960f75270994737e
	Oct 18 17:50:54 ha-181800 crio[665]: time="2025-10-18T17:50:54.473504065Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 17:50:54 ha-181800 crio[665]: time="2025-10-18T17:50:54.479286252Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 17:50:54 ha-181800 crio[665]: time="2025-10-18T17:50:54.479449553Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 17:50:54 ha-181800 crio[665]: time="2025-10-18T17:50:54.479659115Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 17:50:54 ha-181800 crio[665]: time="2025-10-18T17:50:54.500865649Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 17:50:54 ha-181800 crio[665]: time="2025-10-18T17:50:54.502400176Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 17:50:54 ha-181800 crio[665]: time="2025-10-18T17:50:54.502551702Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 17:50:54 ha-181800 crio[665]: time="2025-10-18T17:50:54.511806492Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 17:50:54 ha-181800 crio[665]: time="2025-10-18T17:50:54.511960258Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 17:50:54 ha-181800 crio[665]: time="2025-10-18T17:50:54.51203262Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 17:50:54 ha-181800 crio[665]: time="2025-10-18T17:50:54.515388889Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 17:50:54 ha-181800 crio[665]: time="2025-10-18T17:50:54.515422391Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 17:50:54 ha-181800 crio[665]: time="2025-10-18T17:50:54.515444882Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 17:50:54 ha-181800 crio[665]: time="2025-10-18T17:50:54.526060264Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 17:50:54 ha-181800 crio[665]: time="2025-10-18T17:50:54.526097122Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	3955a976d16cd       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   18 seconds ago       Running             storage-provisioner       3                   9d76fad66ab67       storage-provisioner                 kube-system
	b70649f38d4c7       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   50 seconds ago       Running             busybox                   2                   2d6e6e05d930c       busybox-7b57f96db7-fbwpv            default
	244a77fe1563d       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   50 seconds ago       Running             coredns                   2                   ac0ef71240719       coredns-66bc5c9577-p7nbg            kube-system
	45c33b76be4e1       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   50 seconds ago       Running             kindnet-cni               2                   0e97ce88bd2d3       kindnet-72mvm                       kube-system
	8aea864f19933       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   50 seconds ago       Running             kube-proxy                2                   c1b0887367928       kube-proxy-stgvm                    kube-system
	6d80af764ee06       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   50 seconds ago       Running             coredns                   2                   ed23b1fbdbbb3       coredns-66bc5c9577-f6v2w            kube-system
	f2f15c809753a       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   50 seconds ago       Exited              storage-provisioner       2                   9d76fad66ab67       storage-provisioner                 kube-system
	4cff6e37b85af       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   51 seconds ago       Running             kube-controller-manager   8                   c14a7cc20dbd7       kube-controller-manager-ha-181800   kube-system
	787ba7d1db588       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Running             kube-apiserver            8                   aedac42fff114       kube-apiserver-ha-181800            kube-system
	bd6f9d7be6037       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   7                   c14a7cc20dbd7       kube-controller-manager-ha-181800   kube-system
	7df0159a16497       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            7                   aedac42fff114       kube-apiserver-ha-181800            kube-system
	8d49f8f056288       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   2 minutes ago        Running             etcd                      2                   c5458ae9aa01d       etcd-ha-181800                      kube-system
	42139c5070f82       2a8917f902489be5a8dd414209c32b77bd644d187ea646d86dbdc31e85efb551   2 minutes ago        Running             kube-vip                  1                   ac5de0631c6c9       kube-vip-ha-181800                  kube-system
	fb83e2f9880f4       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   2 minutes ago        Running             kube-scheduler            2                   042db5c7b2fa5       kube-scheduler-ha-181800            kube-system
	
	
	==> coredns [244a77fe1563d266b1c09476ad0f3463ffeb31f96c85ba703ffe04a24a967497] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42812 - 40298 "HINFO IN 6519948929031597716.8341788919287889456. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.016440056s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [6d80af764ee0602bdd0407c66fcc9de24c8b7b254f4ce667725e048906d15a87] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35970 - 34760 "HINFO IN 4620377952315927478.2937315152384107880. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.029628682s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-181800
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-181800
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=ha-181800
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T17_33_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 17:33:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-181800
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 17:51:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 17:50:10 +0000   Sat, 18 Oct 2025 17:33:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 17:50:10 +0000   Sat, 18 Oct 2025 17:33:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 17:50:10 +0000   Sat, 18 Oct 2025 17:33:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 17:50:10 +0000   Sat, 18 Oct 2025 17:34:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-181800
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                7dc9b150-98ed-4d4d-b680-5759a1e067a9
	  Boot ID:                    8a1d6305-2994-4aa4-a4cc-6b62966b9918
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-fbwpv             0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 coredns-66bc5c9577-f6v2w             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     17m
	  kube-system                 coredns-66bc5c9577-p7nbg             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     17m
	  kube-system                 etcd-ha-181800                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         17m
	  kube-system                 kindnet-72mvm                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      17m
	  kube-system                 kube-apiserver-ha-181800             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-ha-181800    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-stgvm                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-ha-181800             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-vip-ha-181800                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m10s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 48s                    kube-proxy       
	  Normal   Starting                 17m                    kube-proxy       
	  Normal   Starting                 8m58s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  17m (x8 over 17m)      kubelet          Node ha-181800 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    17m (x8 over 17m)      kubelet          Node ha-181800 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     17m (x8 over 17m)      kubelet          Node ha-181800 status is now: NodeHasSufficientPID
	  Normal   Starting                 17m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 17m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  17m                    kubelet          Node ha-181800 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    17m                    kubelet          Node ha-181800 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     17m                    kubelet          Node ha-181800 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           17m                    node-controller  Node ha-181800 event: Registered Node ha-181800 in Controller
	  Normal   RegisteredNode           17m                    node-controller  Node ha-181800 event: Registered Node ha-181800 in Controller
	  Normal   NodeReady                16m                    kubelet          Node ha-181800 status is now: NodeReady
	  Normal   RegisteredNode           15m                    node-controller  Node ha-181800 event: Registered Node ha-181800 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-181800 event: Registered Node ha-181800 in Controller
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)      kubelet          Node ha-181800 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x8 over 11m)      kubelet          Node ha-181800 status is now: NodeHasSufficientPID
	  Normal   Starting                 11m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 11m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)      kubelet          Node ha-181800 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           9m17s                  node-controller  Node ha-181800 event: Registered Node ha-181800 in Controller
	  Normal   NodeHasSufficientMemory  2m47s (x8 over 2m47s)  kubelet          Node ha-181800 status is now: NodeHasSufficientMemory
	  Normal   Starting                 2m47s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m47s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m47s (x8 over 2m47s)  kubelet          Node ha-181800 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m47s (x8 over 2m47s)  kubelet          Node ha-181800 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           52s                    node-controller  Node ha-181800 event: Registered Node ha-181800 in Controller
	  Normal   RegisteredNode           47s                    node-controller  Node ha-181800 event: Registered Node ha-181800 in Controller
	  Normal   RegisteredNode           23s                    node-controller  Node ha-181800 event: Registered Node ha-181800 in Controller
	
	
	Name:               ha-181800-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-181800-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=ha-181800
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_18T17_34_02_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 17:34:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-181800-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 17:51:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 17:51:00 +0000   Sat, 18 Oct 2025 17:50:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 17:51:00 +0000   Sat, 18 Oct 2025 17:50:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 17:51:00 +0000   Sat, 18 Oct 2025 17:50:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 17:51:00 +0000   Sat, 18 Oct 2025 17:50:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-181800-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                b2dd8f24-78e0-4eba-8b0c-b12412f7af7d
	  Boot ID:                    8a1d6305-2994-4aa4-a4cc-6b62966b9918
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-cp9q6                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 etcd-ha-181800-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         17m
	  kube-system                 kindnet-86s8z                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      17m
	  kube-system                 kube-apiserver-ha-181800-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-ha-181800-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-dpwpn                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-ha-181800-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-vip-ha-181800-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 16m                    kube-proxy       
	  Normal   Starting                 19s                    kube-proxy       
	  Normal   RegisteredNode           17m                    node-controller  Node ha-181800-m02 event: Registered Node ha-181800-m02 in Controller
	  Normal   RegisteredNode           16m                    node-controller  Node ha-181800-m02 event: Registered Node ha-181800-m02 in Controller
	  Normal   RegisteredNode           15m                    node-controller  Node ha-181800-m02 event: Registered Node ha-181800-m02 in Controller
	  Normal   NodeHasSufficientPID     13m (x7 over 13m)      kubelet          Node ha-181800-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  13m (x9 over 13m)      kubelet          Node ha-181800-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x8 over 13m)      kubelet          Node ha-181800-m02 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 13m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 13m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeNotReady             13m                    node-controller  Node ha-181800-m02 status is now: NodeNotReady
	  Warning  ContainerGCFailed        12m                    kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           11m                    node-controller  Node ha-181800-m02 event: Registered Node ha-181800-m02 in Controller
	  Normal   RegisteredNode           9m17s                  node-controller  Node ha-181800-m02 event: Registered Node ha-181800-m02 in Controller
	  Normal   NodeNotReady             8m27s                  node-controller  Node ha-181800-m02 status is now: NodeNotReady
	  Warning  CgroupV1                 2m45s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m45s (x8 over 2m45s)  kubelet          Node ha-181800-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m45s (x8 over 2m45s)  kubelet          Node ha-181800-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m45s (x8 over 2m45s)  kubelet          Node ha-181800-m02 status is now: NodeHasSufficientPID
	  Warning  ContainerGCFailed        105s                   kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           52s                    node-controller  Node ha-181800-m02 event: Registered Node ha-181800-m02 in Controller
	  Normal   RegisteredNode           47s                    node-controller  Node ha-181800-m02 event: Registered Node ha-181800-m02 in Controller
	  Normal   RegisteredNode           23s                    node-controller  Node ha-181800-m02 event: Registered Node ha-181800-m02 in Controller
	
	
	Name:               ha-181800-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-181800-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=ha-181800
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_18T17_35_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 17:35:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-181800-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 17:51:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 17:50:34 +0000   Sat, 18 Oct 2025 17:50:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 17:50:34 +0000   Sat, 18 Oct 2025 17:50:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 17:50:34 +0000   Sat, 18 Oct 2025 17:50:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 17:50:34 +0000   Sat, 18 Oct 2025 17:50:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-181800-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                4a1abf8a-63a3-4737-81ec-1878616c489b
	  Boot ID:                    8a1d6305-2994-4aa4-a4cc-6b62966b9918
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-lzcbm                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 etcd-ha-181800-m03                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         15m
	  kube-system                 kindnet-9qbbw                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      15m
	  kube-system                 kube-apiserver-ha-181800-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-181800-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-qsqmb                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-181800-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-181800-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 15m                kube-proxy       
	  Normal   Starting                 26s                kube-proxy       
	  Normal   RegisteredNode           15m                node-controller  Node ha-181800-m03 event: Registered Node ha-181800-m03 in Controller
	  Normal   RegisteredNode           15m                node-controller  Node ha-181800-m03 event: Registered Node ha-181800-m03 in Controller
	  Normal   RegisteredNode           15m                node-controller  Node ha-181800-m03 event: Registered Node ha-181800-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-181800-m03 event: Registered Node ha-181800-m03 in Controller
	  Normal   RegisteredNode           9m18s              node-controller  Node ha-181800-m03 event: Registered Node ha-181800-m03 in Controller
	  Normal   NodeNotReady             8m28s              node-controller  Node ha-181800-m03 status is now: NodeNotReady
	  Normal   RegisteredNode           53s                node-controller  Node ha-181800-m03 event: Registered Node ha-181800-m03 in Controller
	  Normal   NodeHasSufficientMemory  52s (x8 over 52s)  kubelet          Node ha-181800-m03 status is now: NodeHasSufficientMemory
	  Normal   Starting                 52s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 52s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    52s (x8 over 52s)  kubelet          Node ha-181800-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     52s (x8 over 52s)  kubelet          Node ha-181800-m03 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           48s                node-controller  Node ha-181800-m03 event: Registered Node ha-181800-m03 in Controller
	  Normal   RegisteredNode           24s                node-controller  Node ha-181800-m03 event: Registered Node ha-181800-m03 in Controller
	
	
	Name:               ha-181800-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-181800-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=ha-181800
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_18T17_36_11_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 17:36:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-181800-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 17:51:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 17:50:50 +0000   Sat, 18 Oct 2025 17:50:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 17:50:50 +0000   Sat, 18 Oct 2025 17:50:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 17:50:50 +0000   Sat, 18 Oct 2025 17:50:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 17:50:50 +0000   Sat, 18 Oct 2025 17:50:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-181800-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                afc79373-b3a1-4495-8f28-5c3685ad131e
	  Boot ID:                    8a1d6305-2994-4aa4-a4cc-6b62966b9918
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-88bv7       100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      14m
	  kube-system                 kube-proxy-fj4ww    0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 7s                 kube-proxy       
	  Normal   Starting                 14m                kube-proxy       
	  Warning  CgroupV1                 14m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     14m (x3 over 14m)  kubelet          Node ha-181800-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    14m (x3 over 14m)  kubelet          Node ha-181800-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  14m (x3 over 14m)  kubelet          Node ha-181800-m04 status is now: NodeHasSufficientMemory
	  Normal   Starting                 14m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           14m                node-controller  Node ha-181800-m04 event: Registered Node ha-181800-m04 in Controller
	  Normal   RegisteredNode           14m                node-controller  Node ha-181800-m04 event: Registered Node ha-181800-m04 in Controller
	  Normal   RegisteredNode           14m                node-controller  Node ha-181800-m04 event: Registered Node ha-181800-m04 in Controller
	  Normal   NodeReady                14m                kubelet          Node ha-181800-m04 status is now: NodeReady
	  Normal   RegisteredNode           11m                node-controller  Node ha-181800-m04 event: Registered Node ha-181800-m04 in Controller
	  Normal   RegisteredNode           9m18s              node-controller  Node ha-181800-m04 event: Registered Node ha-181800-m04 in Controller
	  Normal   NodeNotReady             8m28s              node-controller  Node ha-181800-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           53s                node-controller  Node ha-181800-m04 event: Registered Node ha-181800-m04 in Controller
	  Normal   RegisteredNode           48s                node-controller  Node ha-181800-m04 event: Registered Node ha-181800-m04 in Controller
	  Normal   Starting                 28s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 28s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  25s (x8 over 28s)  kubelet          Node ha-181800-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    25s (x8 over 28s)  kubelet          Node ha-181800-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     25s (x8 over 28s)  kubelet          Node ha-181800-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           24s                node-controller  Node ha-181800-m04 event: Registered Node ha-181800-m04 in Controller
	
	
	==> dmesg <==
	[Oct18 16:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014995] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.499206] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.035776] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.808632] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.418900] kauditd_printk_skb: 36 callbacks suppressed
	[Oct18 17:12] overlayfs: idmapped layers are currently not supported
	[  +0.082393] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Oct18 17:18] overlayfs: idmapped layers are currently not supported
	[Oct18 17:19] overlayfs: idmapped layers are currently not supported
	[Oct18 17:33] overlayfs: idmapped layers are currently not supported
	[ +35.716082] overlayfs: idmapped layers are currently not supported
	[Oct18 17:35] overlayfs: idmapped layers are currently not supported
	[Oct18 17:36] overlayfs: idmapped layers are currently not supported
	[Oct18 17:37] overlayfs: idmapped layers are currently not supported
	[Oct18 17:39] overlayfs: idmapped layers are currently not supported
	[  +3.088699] overlayfs: idmapped layers are currently not supported
	[Oct18 17:48] overlayfs: idmapped layers are currently not supported
	[  +2.594489] overlayfs: idmapped layers are currently not supported
	[Oct18 17:50] overlayfs: idmapped layers are currently not supported
	[ +42.240353] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [8d49f8f05628805a90b3d99b19810fe13d13747bb11c8daf730344aef4d339f6] <==
	{"level":"warn","ts":"2025-10-18T17:50:18.925210Z","caller":"rafthttp/stream.go:222","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7"}
	{"level":"warn","ts":"2025-10-18T17:50:18.926176Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7","error":"unexpected EOF"}
	{"level":"warn","ts":"2025-10-18T17:50:18.926413Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7","error":"unexpected EOF"}
	{"level":"warn","ts":"2025-10-18T17:50:19.134085Z","caller":"rafthttp/stream.go:222","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7"}
	{"level":"warn","ts":"2025-10-18T17:50:20.526420Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"99f9e9c79f233aa7","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-18T17:50:20.526489Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"99f9e9c79f233aa7","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-18T17:50:23.070868Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"99f9e9c79f233aa7","rtt":"90.25454ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-18T17:50:23.070914Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"99f9e9c79f233aa7","rtt":"92.129317ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-18T17:50:24.527842Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"99f9e9c79f233aa7","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-18T17:50:24.527894Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"99f9e9c79f233aa7","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-18T17:50:28.072088Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"99f9e9c79f233aa7","rtt":"92.129317ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-18T17:50:28.072113Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"99f9e9c79f233aa7","rtt":"90.25454ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-18T17:50:28.529752Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"99f9e9c79f233aa7","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-18T17:50:28.529805Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"99f9e9c79f233aa7","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"info","ts":"2025-10-18T17:50:29.518632Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"99f9e9c79f233aa7","stream-type":"stream Message"}
	{"level":"info","ts":"2025-10-18T17:50:29.518747Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"99f9e9c79f233aa7"}
	{"level":"info","ts":"2025-10-18T17:50:29.518785Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7"}
	{"level":"info","ts":"2025-10-18T17:50:29.531459Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"99f9e9c79f233aa7","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2025-10-18T17:50:29.531564Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7"}
	{"level":"info","ts":"2025-10-18T17:50:29.566541Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7"}
	{"level":"info","ts":"2025-10-18T17:50:29.568631Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7"}
	{"level":"warn","ts":"2025-10-18T17:50:55.845679Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"101.649618ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets/kube-system/coredns-66bc5c9577\" limit:1 ","response":"range_response_count:1 size:4149"}
	{"level":"info","ts":"2025-10-18T17:50:55.845748Z","caller":"traceutil/trace.go:172","msg":"trace[408244320] range","detail":"{range_begin:/registry/replicasets/kube-system/coredns-66bc5c9577; range_end:; response_count:1; response_revision:3630; }","duration":"101.731957ms","start":"2025-10-18T17:50:55.744004Z","end":"2025-10-18T17:50:55.845736Z","steps":["trace[408244320] 'agreement among raft nodes before linearized reading'  (duration: 98.079225ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T17:51:04.884807Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"171.562827ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/\" range_end:\"/registry/events0\" limit:500 ","response":"range_response_count:500 size:368084"}
	{"level":"info","ts":"2025-10-18T17:51:04.884868Z","caller":"traceutil/trace.go:172","msg":"trace[1495845859] range","detail":"{range_begin:/registry/events/; range_end:/registry/events0; response_count:500; response_revision:3673; }","duration":"171.651402ms","start":"2025-10-18T17:51:04.713205Z","end":"2025-10-18T17:51:04.884856Z","steps":["trace[1495845859] 'range keys from bolt db'  (duration: 170.557243ms)"],"step_count":1}
	
	
	==> kernel <==
	 17:51:05 up  1:33,  0 user,  load average: 5.18, 2.25, 1.42
	Linux ha-181800 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [45c33b76be4e1c5e61c683306b76aeb0fcbfda863ba2562aee4d85f222728470] <==
	E1018 17:50:44.474031       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1018 17:50:44.474037       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1018 17:50:44.474320       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1018 17:50:45.874766       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 17:50:45.874813       1 metrics.go:72] Registering metrics
	I1018 17:50:45.874874       1 controller.go:711] "Syncing nftables rules"
	I1018 17:50:54.472199       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 17:50:54.472257       1 main.go:301] handling current node
	I1018 17:50:54.476828       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1018 17:50:54.476996       1 main.go:324] Node ha-181800-m02 has CIDR [10.244.1.0/24] 
	I1018 17:50:54.477327       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.49.3 Flags: [] Table: 0 Realm: 0} 
	I1018 17:50:54.478599       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1018 17:50:54.478620       1 main.go:324] Node ha-181800-m03 has CIDR [10.244.2.0/24] 
	I1018 17:50:54.478703       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.168.49.4 Flags: [] Table: 0 Realm: 0} 
	I1018 17:50:54.478760       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1018 17:50:54.478767       1 main.go:324] Node ha-181800-m04 has CIDR [10.244.3.0/24] 
	I1018 17:50:54.478814       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 192.168.49.5 Flags: [] Table: 0 Realm: 0} 
	I1018 17:51:04.473366       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1018 17:51:04.473481       1 main.go:324] Node ha-181800-m02 has CIDR [10.244.1.0/24] 
	I1018 17:51:04.473778       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1018 17:51:04.473819       1 main.go:324] Node ha-181800-m03 has CIDR [10.244.2.0/24] 
	I1018 17:51:04.473921       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1018 17:51:04.473929       1 main.go:324] Node ha-181800-m04 has CIDR [10.244.3.0/24] 
	I1018 17:51:04.474005       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 17:51:04.474011       1 main.go:301] handling current node
	
	
	==> kube-apiserver [787ba7d1db5885d5987b39cc564271b65d0c3534789595970e69e1fc2af692fa] <==
	I1018 17:50:08.637365       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 17:50:08.648586       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1018 17:50:08.649478       1 aggregator.go:171] initial CRD sync complete...
	I1018 17:50:08.658365       1 autoregister_controller.go:144] Starting autoregister controller
	I1018 17:50:08.658478       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 17:50:08.658528       1 cache.go:39] Caches are synced for autoregister controller
	I1018 17:50:08.648742       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1018 17:50:08.660408       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1018 17:50:08.685820       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1018 17:50:08.685952       1 policy_source.go:240] refreshing policies
	I1018 17:50:08.705489       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1018 17:50:08.711819       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1018 17:50:08.721543       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1018 17:50:08.729935       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1018 17:50:08.730318       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1018 17:50:08.730492       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1018 17:50:08.730520       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1018 17:50:08.730960       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1018 17:50:08.746648       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1018 17:50:08.747504       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 17:50:09.243989       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 17:50:13.235609       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 17:50:36.709527       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1018 17:50:36.815877       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 17:50:46.351258       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-apiserver [7df0159a16497989a32ac40623e8901229679b8716e6b590b84a0d3e1054f4d6] <==
	I1018 17:49:21.128362       1 server.go:150] Version: v1.34.1
	I1018 17:49:21.128401       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W1018 17:49:22.017042       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=scheduling.k8s.io/v1alpha1
	W1018 17:49:22.017075       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=admissionregistration.k8s.io/v1alpha1
	W1018 17:49:22.017084       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=certificates.k8s.io/v1alpha1
	W1018 17:49:22.017089       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=imagepolicy.k8s.io/v1alpha1
	W1018 17:49:22.017094       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=rbac.authorization.k8s.io/v1alpha1
	W1018 17:49:22.017098       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storage.k8s.io/v1alpha1
	W1018 17:49:22.017103       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=coordination.k8s.io/v1alpha2
	W1018 17:49:22.017107       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=resource.k8s.io/v1alpha3
	W1018 17:49:22.017111       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storagemigration.k8s.io/v1alpha1
	W1018 17:49:22.017116       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=internal.apiserver.k8s.io/v1alpha1
	W1018 17:49:22.017120       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=authentication.k8s.io/v1alpha1
	W1018 17:49:22.017125       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=node.k8s.io/v1alpha1
	W1018 17:49:22.035548       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1018 17:49:22.037326       1 logging.go:55] [core] [Channel #4 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1018 17:49:22.037937       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	I1018 17:49:22.044391       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1018 17:49:22.056396       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I1018 17:49:22.056496       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1018 17:49:22.056813       1 instance.go:239] Using reconciler: lease
	W1018 17:49:22.058127       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1018 17:49:42.034705       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1018 17:49:42.036960       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F1018 17:49:42.058557       1 instance.go:232] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [4cff6e37b85af70621f4b47faf3b854223fcae935be9ad45a9a99a523f33574b] <==
	I1018 17:50:17.456776       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 17:50:17.456827       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1018 17:50:17.459438       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 17:50:17.465430       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1018 17:50:17.465499       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 17:50:17.471365       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1018 17:50:17.475715       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 17:50:17.477471       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1018 17:50:17.478740       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-181800-m03"
	I1018 17:50:17.478810       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-181800-m04"
	I1018 17:50:17.478834       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-181800"
	I1018 17:50:17.478868       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-181800-m02"
	I1018 17:50:17.479116       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1018 17:50:17.483521       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1018 17:50:17.491656       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 17:50:17.491691       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 17:50:17.491699       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 17:50:17.491580       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1018 17:50:17.503394       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 17:50:17.508362       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1018 17:50:17.509154       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 17:50:50.411726       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-181800-m04"
	I1018 17:50:55.780269       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-kgtwl EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-kgtwl\": the object has been modified; please apply your changes to the latest version and try again"
	I1018 17:50:55.782431       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"9f28e5d3-f804-46e7-b8a3-f9f96165b245", APIVersion:"v1", ResourceVersion:"306", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-kgtwl EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-kgtwl": the object has been modified; please apply your changes to the latest version and try again
	E1018 17:50:55.860481       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/coredns-66bc5c9577\" failed with Operation cannot be fulfilled on replicasets.apps \"coredns-66bc5c9577\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	
	
	==> kube-controller-manager [bd6f9d7be603729a0a5200b910dc4c63002c84e58b83cb98debb890cf0bf202d] <==
	I1018 17:49:24.964069       1 serving.go:386] Generated self-signed cert in-memory
	I1018 17:49:25.434782       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1018 17:49:25.434808       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 17:49:25.436324       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1018 17:49:25.436542       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1018 17:49:25.436706       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1018 17:49:25.436723       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1018 17:49:45.439754       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8443/healthz\": dial tcp 192.168.49.2:8443: connect: connection refused"
	
	
	==> kube-proxy [8aea864f19933a28597488b60aa422e08bea2bfd07e84bd2fec57087062dc95f] <==
	I1018 17:50:15.663641       1 server_linux.go:53] "Using iptables proxy"
	I1018 17:50:16.334903       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 17:50:16.464013       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 17:50:16.464050       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1018 17:50:16.464138       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 17:50:16.493669       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 17:50:16.493728       1 server_linux.go:132] "Using iptables Proxier"
	I1018 17:50:16.497992       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 17:50:16.498301       1 server.go:527] "Version info" version="v1.34.1"
	I1018 17:50:16.498377       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 17:50:16.507101       1 config.go:200] "Starting service config controller"
	I1018 17:50:16.507206       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 17:50:16.507258       1 config.go:106] "Starting endpoint slice config controller"
	I1018 17:50:16.507322       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 17:50:16.507360       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 17:50:16.507388       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 17:50:16.510070       1 config.go:309] "Starting node config controller"
	I1018 17:50:16.510095       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 17:50:16.510103       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 17:50:16.607760       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 17:50:16.607802       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 17:50:16.607844       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [fb83e2f9880f48e77ccba9ff1a0240a5eacc8c5f0b7758c70e7c19289ba8795a] <==
	E1018 17:49:18.573967       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 17:49:18.841410       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 17:49:19.275891       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 17:49:19.842357       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 17:49:20.476775       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 17:49:37.786434       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 17:49:40.793324       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 17:49:41.589579       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1018 17:49:41.815367       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 17:49:41.825500       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 17:49:42.301676       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 17:49:43.065969       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:39986->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 17:49:43.066081       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:40092->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 17:49:43.066189       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:40068->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 17:49:43.066278       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:40058->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 17:49:43.066363       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:40052->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 17:49:43.066439       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:40100->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 17:49:43.066513       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:40112->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 17:49:43.066604       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:40060->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 17:49:43.066611       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:39994->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 17:49:43.066695       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:40008->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 17:49:43.066704       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:40038->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 17:49:43.066779       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:39980->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 17:49:43.066800       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:40040->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I1018 17:50:11.465530       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 17:50:12 ha-181800 kubelet[798]: I1018 17:50:12.842479     798 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ha-181800"
	Oct 18 17:50:12 ha-181800 kubelet[798]: E1018 17:50:12.856112     798 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ha-181800\" already exists" pod="kube-system/kube-controller-manager-ha-181800"
	Oct 18 17:50:12 ha-181800 kubelet[798]: I1018 17:50:12.856349     798 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ha-181800"
	Oct 18 17:50:12 ha-181800 kubelet[798]: E1018 17:50:12.867959     798 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ha-181800\" already exists" pod="kube-system/kube-scheduler-ha-181800"
	Oct 18 17:50:12 ha-181800 kubelet[798]: I1018 17:50:12.868003     798 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-vip-ha-181800"
	Oct 18 17:50:12 ha-181800 kubelet[798]: E1018 17:50:12.881408     798 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-vip-ha-181800\" already exists" pod="kube-system/kube-vip-ha-181800"
	Oct 18 17:50:12 ha-181800 kubelet[798]: I1018 17:50:12.881451     798 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-ha-181800"
	Oct 18 17:50:12 ha-181800 kubelet[798]: E1018 17:50:12.896352     798 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-ha-181800\" already exists" pod="kube-system/etcd-ha-181800"
	Oct 18 17:50:13 ha-181800 kubelet[798]: I1018 17:50:13.091654     798 apiserver.go:52] "Watching apiserver"
	Oct 18 17:50:13 ha-181800 kubelet[798]: I1018 17:50:13.099077     798 scope.go:117] "RemoveContainer" containerID="bd6f9d7be603729a0a5200b910dc4c63002c84e58b83cb98debb890cf0bf202d"
	Oct 18 17:50:13 ha-181800 kubelet[798]: I1018 17:50:13.216894     798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5edfc356-9d49-4895-b36a-06c2bd39155a-xtables-lock\") pod \"kindnet-72mvm\" (UID: \"5edfc356-9d49-4895-b36a-06c2bd39155a\") " pod="kube-system/kindnet-72mvm"
	Oct 18 17:50:13 ha-181800 kubelet[798]: I1018 17:50:13.217054     798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/15b89226-91ae-478f-acfe-7841776b1377-xtables-lock\") pod \"kube-proxy-stgvm\" (UID: \"15b89226-91ae-478f-acfe-7841776b1377\") " pod="kube-system/kube-proxy-stgvm"
	Oct 18 17:50:13 ha-181800 kubelet[798]: I1018 17:50:13.217077     798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/15b89226-91ae-478f-acfe-7841776b1377-lib-modules\") pod \"kube-proxy-stgvm\" (UID: \"15b89226-91ae-478f-acfe-7841776b1377\") " pod="kube-system/kube-proxy-stgvm"
	Oct 18 17:50:13 ha-181800 kubelet[798]: I1018 17:50:13.217093     798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/3c6521cd-8e1b-46aa-96a3-39e475e1426c-tmp\") pod \"storage-provisioner\" (UID: \"3c6521cd-8e1b-46aa-96a3-39e475e1426c\") " pod="kube-system/storage-provisioner"
	Oct 18 17:50:13 ha-181800 kubelet[798]: I1018 17:50:13.217110     798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/5edfc356-9d49-4895-b36a-06c2bd39155a-cni-cfg\") pod \"kindnet-72mvm\" (UID: \"5edfc356-9d49-4895-b36a-06c2bd39155a\") " pod="kube-system/kindnet-72mvm"
	Oct 18 17:50:13 ha-181800 kubelet[798]: I1018 17:50:13.217127     798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5edfc356-9d49-4895-b36a-06c2bd39155a-lib-modules\") pod \"kindnet-72mvm\" (UID: \"5edfc356-9d49-4895-b36a-06c2bd39155a\") " pod="kube-system/kindnet-72mvm"
	Oct 18 17:50:13 ha-181800 kubelet[798]: I1018 17:50:13.222063     798 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 18 17:50:13 ha-181800 kubelet[798]: I1018 17:50:13.266801     798 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 18 17:50:13 ha-181800 kubelet[798]: W1018 17:50:13.559633     798 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/5743bf3218eb5aef405eed98d37c004771c8345cbd69290b8943dc0c7cbde7c2/crio-c1b08873679284c397e63dc0b5e86a2778290edfaa47a2d3af86e787870c2624 WatchSource:0}: Error finding container c1b08873679284c397e63dc0b5e86a2778290edfaa47a2d3af86e787870c2624: Status 404 returned error can't find the container with id c1b08873679284c397e63dc0b5e86a2778290edfaa47a2d3af86e787870c2624
	Oct 18 17:50:13 ha-181800 kubelet[798]: W1018 17:50:13.569533     798 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/5743bf3218eb5aef405eed98d37c004771c8345cbd69290b8943dc0c7cbde7c2/crio-0e97ce88bd2d3a36101a0a9930710ba30f34091e61ed0ed0249bd68b5d0f6fa7 WatchSource:0}: Error finding container 0e97ce88bd2d3a36101a0a9930710ba30f34091e61ed0ed0249bd68b5d0f6fa7: Status 404 returned error can't find the container with id 0e97ce88bd2d3a36101a0a9930710ba30f34091e61ed0ed0249bd68b5d0f6fa7
	Oct 18 17:50:13 ha-181800 kubelet[798]: W1018 17:50:13.789592     798 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/5743bf3218eb5aef405eed98d37c004771c8345cbd69290b8943dc0c7cbde7c2/crio-2d6e6e05d930c610e9ac4942479166d3061f0b37055dbc9645478f2923f1ff53 WatchSource:0}: Error finding container 2d6e6e05d930c610e9ac4942479166d3061f0b37055dbc9645478f2923f1ff53: Status 404 returned error can't find the container with id 2d6e6e05d930c610e9ac4942479166d3061f0b37055dbc9645478f2923f1ff53
	Oct 18 17:50:17 ha-181800 kubelet[798]: E1018 17:50:17.091585     798 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/351deab77f22682d337e98537451625e6f5def60ef97378fe2ea489cd9cb173d/diff" to get inode usage: stat /var/lib/containers/storage/overlay/351deab77f22682d337e98537451625e6f5def60ef97378fe2ea489cd9cb173d/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_kube-controller-manager-ha-181800_9656c3d6ff12279b641632c7e3275a8a/kube-controller-manager/6.log" to get inode usage: stat /var/log/pods/kube-system_kube-controller-manager-ha-181800_9656c3d6ff12279b641632c7e3275a8a/kube-controller-manager/6.log: no such file or directory
	Oct 18 17:50:17 ha-181800 kubelet[798]: E1018 17:50:17.097904     798 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/3a8ceae8950ea9bca2bf6a05f4cb7633f55f4458c755f32741110642edbfd7ba/diff" to get inode usage: stat /var/lib/containers/storage/overlay/3a8ceae8950ea9bca2bf6a05f4cb7633f55f4458c755f32741110642edbfd7ba/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_kube-apiserver-ha-181800_f173b0166ea7317b529b58e20ef8d65f/kube-apiserver/6.log" to get inode usage: stat /var/log/pods/kube-system_kube-apiserver-ha-181800_f173b0166ea7317b529b58e20ef8d65f/kube-apiserver/6.log: no such file or directory
	Oct 18 17:50:17 ha-181800 kubelet[798]: E1018 17:50:17.148404     798 cadvisor_stats_provider.go:567] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/crio/crio-dad8e190116effc9294125133d608015a4f2ec86c95f308f26d5e4d771de4985\": RecentStats: unable to find data in memory cache]"
	Oct 18 17:50:45 ha-181800 kubelet[798]: I1018 17:50:45.570659     798 scope.go:117] "RemoveContainer" containerID="f2f15c809753a0cd811b332e6f6a8f9b5be888da593a2286ff085903e5ec3a12"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-181800 -n ha-181800
helpers_test.go:269: (dbg) Run:  kubectl --context ha-181800 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/RestartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartCluster (177.01s)
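The kube-apiserver log above ends with the fatal "Error creating leases: error creating storage factory: context deadline exceeded" after repeated failures to reach etcd on 127.0.0.1:2379, while the scheduler log shows connection refusals against https://192.168.49.2:8443. The following is a minimal, hypothetical connectivity probe for those two endpoints, written as a sketch and not taken from the test suite; it would need to run from inside the ha-181800 node (e.g. over minikube ssh) for the etcd loopback address to be meaningful.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net"
		"net/http"
		"time"
	)

	func main() {
		// Raw TCP dial to etcd's client port; a timeout or refusal here lines up
		// with the handshake failures and "context deadline exceeded" logged above.
		if conn, err := net.DialTimeout("tcp", "127.0.0.1:2379", 3*time.Second); err != nil {
			fmt.Println("etcd dial failed:", err)
		} else {
			conn.Close()
			fmt.Println("etcd client port reachable")
		}

		// Unauthenticated /healthz request against the apiserver endpoint the
		// scheduler was retrying; certificate checks are skipped in this sketch only.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.49.2:8443/healthz")
		if err != nil {
			fmt.Println("apiserver /healthz failed:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("apiserver /healthz:", resp.Status)
	}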

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (3.92s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:392: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.065625164s)
ha_test.go:415: expected profile "ha-181800" in json of 'profile list' to have "Degraded" status but have "HAppy" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-181800\",\"Status\":\"HAppy\",\"Config\":{\"Name\":\"ha-181800\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesR
oot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-181800\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.49.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name
\":\"m02\",\"IP\":\"192.168.49.3\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.49.4\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.49.5\",\"Port\":0,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubetail\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-dev
ice-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\"
:false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-181800
helpers_test.go:243: (dbg) docker inspect ha-181800:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5743bf3218eb5aef405eed98d37c004771c8345cbd69290b8943dc0c7cbde7c2",
	        "Created": "2025-10-18T17:32:56.632116312Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 69617,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T17:48:09.683613005Z",
	            "FinishedAt": "2025-10-18T17:48:08.862033359Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/5743bf3218eb5aef405eed98d37c004771c8345cbd69290b8943dc0c7cbde7c2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5743bf3218eb5aef405eed98d37c004771c8345cbd69290b8943dc0c7cbde7c2/hostname",
	        "HostsPath": "/var/lib/docker/containers/5743bf3218eb5aef405eed98d37c004771c8345cbd69290b8943dc0c7cbde7c2/hosts",
	        "LogPath": "/var/lib/docker/containers/5743bf3218eb5aef405eed98d37c004771c8345cbd69290b8943dc0c7cbde7c2/5743bf3218eb5aef405eed98d37c004771c8345cbd69290b8943dc0c7cbde7c2-json.log",
	        "Name": "/ha-181800",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-181800:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-181800",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5743bf3218eb5aef405eed98d37c004771c8345cbd69290b8943dc0c7cbde7c2",
	                "LowerDir": "/var/lib/docker/overlay2/3ee9cac217f1df57c366a054bdb2c2082365ef42fdd9cc6be8e55acf85cb35b8-init/diff:/var/lib/docker/overlay2/584ab177b02ad2db5330471b7171ad39934c457d8615b9ee4939a04b59f78474/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3ee9cac217f1df57c366a054bdb2c2082365ef42fdd9cc6be8e55acf85cb35b8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3ee9cac217f1df57c366a054bdb2c2082365ef42fdd9cc6be8e55acf85cb35b8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3ee9cac217f1df57c366a054bdb2c2082365ef42fdd9cc6be8e55acf85cb35b8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-181800",
	                "Source": "/var/lib/docker/volumes/ha-181800/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-181800",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-181800",
	                "name.minikube.sigs.k8s.io": "ha-181800",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4110ab73f7f9137e0eb013438b540b426c3fa9fedc1bed76ec7ffcc4fc35499f",
	            "SandboxKey": "/var/run/docker/netns/4110ab73f7f9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32818"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32819"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32822"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32820"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32821"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-181800": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d2:81:2f:47:7d:4c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "903568cdf824d38f52cb9a58c116a852c83eb599cf8cc87e25ba21b593e45142",
	                    "EndpointID": "9a2af9d91b868a8642ef1db81d818bc623c9c1134408c932f61ec269578e0c92",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-181800",
	                        "5743bf3218eb"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-181800 -n ha-181800
helpers_test.go:252: <<< TestMultiControlPlane/serial/DegradedAfterClusterRestart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p ha-181800 logs -n 25: (1.617671088s)
helpers_test.go:260: TestMultiControlPlane/serial/DegradedAfterClusterRestart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ ha-181800 cp ha-181800-m03:/home/docker/cp-test.txt ha-181800-m04:/home/docker/cp-test_ha-181800-m03_ha-181800-m04.txt               │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ ssh     │ ha-181800 ssh -n ha-181800-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ ssh     │ ha-181800 ssh -n ha-181800-m04 sudo cat /home/docker/cp-test_ha-181800-m03_ha-181800-m04.txt                                         │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ cp      │ ha-181800 cp testdata/cp-test.txt ha-181800-m04:/home/docker/cp-test.txt                                                             │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ ssh     │ ha-181800 ssh -n ha-181800-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ cp      │ ha-181800 cp ha-181800-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1463328482/001/cp-test_ha-181800-m04.txt │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ ssh     │ ha-181800 ssh -n ha-181800-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ cp      │ ha-181800 cp ha-181800-m04:/home/docker/cp-test.txt ha-181800:/home/docker/cp-test_ha-181800-m04_ha-181800.txt                       │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ ssh     │ ha-181800 ssh -n ha-181800-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ ssh     │ ha-181800 ssh -n ha-181800 sudo cat /home/docker/cp-test_ha-181800-m04_ha-181800.txt                                                 │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ cp      │ ha-181800 cp ha-181800-m04:/home/docker/cp-test.txt ha-181800-m02:/home/docker/cp-test_ha-181800-m04_ha-181800-m02.txt               │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ ssh     │ ha-181800 ssh -n ha-181800-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ ssh     │ ha-181800 ssh -n ha-181800-m02 sudo cat /home/docker/cp-test_ha-181800-m04_ha-181800-m02.txt                                         │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ cp      │ ha-181800 cp ha-181800-m04:/home/docker/cp-test.txt ha-181800-m03:/home/docker/cp-test_ha-181800-m04_ha-181800-m03.txt               │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ ssh     │ ha-181800 ssh -n ha-181800-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ ssh     │ ha-181800 ssh -n ha-181800-m03 sudo cat /home/docker/cp-test_ha-181800-m04_ha-181800-m03.txt                                         │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ node    │ ha-181800 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ node    │ ha-181800 node start m02 --alsologtostderr -v 5                                                                                      │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:39 UTC │
	│ node    │ ha-181800 node list --alsologtostderr -v 5                                                                                           │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:39 UTC │                     │
	│ stop    │ ha-181800 stop --alsologtostderr -v 5                                                                                                │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:39 UTC │ 18 Oct 25 17:39 UTC │
	│ start   │ ha-181800 start --wait true --alsologtostderr -v 5                                                                                   │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:39 UTC │                     │
	│ node    │ ha-181800 node list --alsologtostderr -v 5                                                                                           │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:47 UTC │                     │
	│ node    │ ha-181800 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:47 UTC │                     │
	│ stop    │ ha-181800 stop --alsologtostderr -v 5                                                                                                │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:47 UTC │ 18 Oct 25 17:48 UTC │
	│ start   │ ha-181800 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio                                         │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:48 UTC │ 18 Oct 25 17:51 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 17:48:09
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 17:48:09.416034   69488 out.go:360] Setting OutFile to fd 1 ...
	I1018 17:48:09.416413   69488 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 17:48:09.416429   69488 out.go:374] Setting ErrFile to fd 2...
	I1018 17:48:09.416435   69488 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 17:48:09.416751   69488 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-2509/.minikube/bin
	I1018 17:48:09.417210   69488 out.go:368] Setting JSON to false
	I1018 17:48:09.418048   69488 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":5439,"bootTime":1760804251,"procs":151,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1018 17:48:09.418116   69488 start.go:141] virtualization:  
	I1018 17:48:09.421406   69488 out.go:179] * [ha-181800] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 17:48:09.425201   69488 out.go:179]   - MINIKUBE_LOCATION=21409
	I1018 17:48:09.425270   69488 notify.go:220] Checking for updates...
	I1018 17:48:09.431395   69488 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 17:48:09.434249   69488 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-2509/kubeconfig
	I1018 17:48:09.437177   69488 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-2509/.minikube
	I1018 17:48:09.439990   69488 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 17:48:09.442873   69488 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 17:48:09.446186   69488 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:48:09.446753   69488 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 17:48:09.469689   69488 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 17:48:09.469810   69488 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 17:48:09.525756   69488 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-18 17:48:09.516473467 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 17:48:09.525901   69488 docker.go:318] overlay module found
	I1018 17:48:09.529121   69488 out.go:179] * Using the docker driver based on existing profile
	I1018 17:48:09.532020   69488 start.go:305] selected driver: docker
	I1018 17:48:09.532065   69488 start.go:925] validating driver "docker" against &{Name:ha-181800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-181800 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 17:48:09.532200   69488 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 17:48:09.532300   69488 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 17:48:09.595274   69488 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-18 17:48:09.586260967 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 17:48:09.595672   69488 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 17:48:09.595711   69488 cni.go:84] Creating CNI manager for ""
	I1018 17:48:09.595769   69488 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1018 17:48:09.595821   69488 start.go:349] cluster config:
	{Name:ha-181800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-181800 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 17:48:09.600762   69488 out.go:179] * Starting "ha-181800" primary control-plane node in "ha-181800" cluster
	I1018 17:48:09.603624   69488 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 17:48:09.606573   69488 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 17:48:09.609415   69488 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 17:48:09.609455   69488 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 17:48:09.609472   69488 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1018 17:48:09.609485   69488 cache.go:58] Caching tarball of preloaded images
	I1018 17:48:09.609580   69488 preload.go:233] Found /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 17:48:09.609590   69488 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 17:48:09.609731   69488 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/config.json ...
	I1018 17:48:09.629715   69488 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 17:48:09.629738   69488 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 17:48:09.629751   69488 cache.go:232] Successfully downloaded all kic artifacts
	I1018 17:48:09.629773   69488 start.go:360] acquireMachinesLock for ha-181800: {Name:mk3f5dfba2ab7d01f94f924dfcc5edab5f076901 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 17:48:09.629829   69488 start.go:364] duration metric: took 36.414µs to acquireMachinesLock for "ha-181800"
	I1018 17:48:09.629854   69488 start.go:96] Skipping create...Using existing machine configuration
	I1018 17:48:09.629859   69488 fix.go:54] fixHost starting: 
	I1018 17:48:09.630111   69488 cli_runner.go:164] Run: docker container inspect ha-181800 --format={{.State.Status}}
	I1018 17:48:09.646601   69488 fix.go:112] recreateIfNeeded on ha-181800: state=Stopped err=<nil>
	W1018 17:48:09.646633   69488 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 17:48:09.649905   69488 out.go:252] * Restarting existing docker container for "ha-181800" ...
	I1018 17:48:09.649988   69488 cli_runner.go:164] Run: docker start ha-181800
	I1018 17:48:09.903186   69488 cli_runner.go:164] Run: docker container inspect ha-181800 --format={{.State.Status}}
	I1018 17:48:09.925021   69488 kic.go:430] container "ha-181800" state is running.
	I1018 17:48:09.925620   69488 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-181800
	I1018 17:48:09.948773   69488 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/config.json ...
	I1018 17:48:09.949327   69488 machine.go:93] provisionDockerMachine start ...
	I1018 17:48:09.949403   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:48:09.972918   69488 main.go:141] libmachine: Using SSH client type: native
	I1018 17:48:09.973247   69488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I1018 17:48:09.973265   69488 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 17:48:09.973813   69488 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1018 17:48:13.124675   69488 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-181800
	
	I1018 17:48:13.124706   69488 ubuntu.go:182] provisioning hostname "ha-181800"
	I1018 17:48:13.124768   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:48:13.142493   69488 main.go:141] libmachine: Using SSH client type: native
	I1018 17:48:13.142802   69488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I1018 17:48:13.142819   69488 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-181800 && echo "ha-181800" | sudo tee /etc/hostname
	I1018 17:48:13.298978   69488 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-181800
	
	I1018 17:48:13.299071   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:48:13.318549   69488 main.go:141] libmachine: Using SSH client type: native
	I1018 17:48:13.318864   69488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I1018 17:48:13.318885   69488 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-181800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-181800/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-181800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 17:48:13.464891   69488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 17:48:13.464913   69488 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-2509/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-2509/.minikube}
	I1018 17:48:13.464930   69488 ubuntu.go:190] setting up certificates
	I1018 17:48:13.464957   69488 provision.go:84] configureAuth start
	I1018 17:48:13.465015   69488 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-181800
	I1018 17:48:13.482208   69488 provision.go:143] copyHostCerts
	I1018 17:48:13.482250   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem
	I1018 17:48:13.482283   69488 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem, removing ...
	I1018 17:48:13.482302   69488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem
	I1018 17:48:13.482377   69488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem (1078 bytes)
	I1018 17:48:13.482463   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem
	I1018 17:48:13.482486   69488 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem, removing ...
	I1018 17:48:13.482493   69488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem
	I1018 17:48:13.482520   69488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem (1123 bytes)
	I1018 17:48:13.482562   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem
	I1018 17:48:13.482582   69488 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem, removing ...
	I1018 17:48:13.482588   69488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem
	I1018 17:48:13.482612   69488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem (1675 bytes)
	I1018 17:48:13.482660   69488 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem org=jenkins.ha-181800 san=[127.0.0.1 192.168.49.2 ha-181800 localhost minikube]
	I1018 17:48:14.423915   69488 provision.go:177] copyRemoteCerts
	I1018 17:48:14.423988   69488 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 17:48:14.424038   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:48:14.441172   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800/id_rsa Username:docker}
	I1018 17:48:14.544666   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1018 17:48:14.544730   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1018 17:48:14.562271   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1018 17:48:14.562355   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 17:48:14.579774   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1018 17:48:14.579882   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 17:48:14.597738   69488 provision.go:87] duration metric: took 1.132758135s to configureAuth
	I1018 17:48:14.597766   69488 ubuntu.go:206] setting minikube options for container-runtime
	I1018 17:48:14.598014   69488 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:48:14.598118   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:48:14.616530   69488 main.go:141] libmachine: Using SSH client type: native
	I1018 17:48:14.616832   69488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I1018 17:48:14.616852   69488 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 17:48:14.938623   69488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 17:48:14.938694   69488 machine.go:96] duration metric: took 4.989343324s to provisionDockerMachine
	I1018 17:48:14.938719   69488 start.go:293] postStartSetup for "ha-181800" (driver="docker")
	I1018 17:48:14.938743   69488 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 17:48:14.938827   69488 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 17:48:14.938907   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:48:14.961006   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800/id_rsa Username:docker}
	I1018 17:48:15.069145   69488 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 17:48:15.072788   69488 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 17:48:15.072820   69488 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 17:48:15.072832   69488 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/addons for local assets ...
	I1018 17:48:15.072889   69488 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/files for local assets ...
	I1018 17:48:15.073008   69488 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> 43202.pem in /etc/ssl/certs
	I1018 17:48:15.073020   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> /etc/ssl/certs/43202.pem
	I1018 17:48:15.073124   69488 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 17:48:15.080710   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /etc/ssl/certs/43202.pem (1708 bytes)
	I1018 17:48:15.098679   69488 start.go:296] duration metric: took 159.932309ms for postStartSetup
	I1018 17:48:15.098839   69488 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 17:48:15.098888   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:48:15.116684   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800/id_rsa Username:docker}
	I1018 17:48:15.217789   69488 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 17:48:15.222543   69488 fix.go:56] duration metric: took 5.59267659s for fixHost
	I1018 17:48:15.222570   69488 start.go:83] releasing machines lock for "ha-181800", held for 5.59272729s
	I1018 17:48:15.222640   69488 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-181800
	I1018 17:48:15.239602   69488 ssh_runner.go:195] Run: cat /version.json
	I1018 17:48:15.239657   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:48:15.239935   69488 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 17:48:15.239989   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:48:15.258489   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800/id_rsa Username:docker}
	I1018 17:48:15.259704   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800/id_rsa Username:docker}
	I1018 17:48:15.360628   69488 ssh_runner.go:195] Run: systemctl --version
	I1018 17:48:15.453252   69488 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 17:48:15.490459   69488 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 17:48:15.494882   69488 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 17:48:15.494987   69488 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 17:48:15.502526   69488 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 17:48:15.502555   69488 start.go:495] detecting cgroup driver to use...
	I1018 17:48:15.502585   69488 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 17:48:15.502634   69488 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 17:48:15.518083   69488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 17:48:15.531171   69488 docker.go:218] disabling cri-docker service (if available) ...
	I1018 17:48:15.531254   69488 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 17:48:15.547013   69488 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 17:48:15.559697   69488 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 17:48:15.666369   69488 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 17:48:15.774518   69488 docker.go:234] disabling docker service ...
	I1018 17:48:15.774580   69488 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 17:48:15.789730   69488 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 17:48:15.802288   69488 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 17:48:15.919408   69488 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 17:48:16.029842   69488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 17:48:16.043317   69488 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 17:48:16.059310   69488 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 17:48:16.059453   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:48:16.069280   69488 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 17:48:16.069350   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:48:16.078814   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:48:16.087874   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:48:16.097837   69488 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 17:48:16.106890   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:48:16.115708   69488 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:48:16.123935   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:48:16.132770   69488 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 17:48:16.140320   69488 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 17:48:16.147761   69488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 17:48:16.260916   69488 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 17:48:16.404712   69488 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 17:48:16.404830   69488 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 17:48:16.408509   69488 start.go:563] Will wait 60s for crictl version
	I1018 17:48:16.408623   69488 ssh_runner.go:195] Run: which crictl
	I1018 17:48:16.411907   69488 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 17:48:16.435137   69488 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 17:48:16.435295   69488 ssh_runner.go:195] Run: crio --version
	I1018 17:48:16.466039   69488 ssh_runner.go:195] Run: crio --version
	I1018 17:48:16.501936   69488 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 17:48:16.504878   69488 cli_runner.go:164] Run: docker network inspect ha-181800 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 17:48:16.520780   69488 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1018 17:48:16.524665   69488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 17:48:16.534613   69488 kubeadm.go:883] updating cluster {Name:ha-181800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-181800 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 17:48:16.534762   69488 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 17:48:16.534819   69488 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 17:48:16.574503   69488 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 17:48:16.574531   69488 crio.go:433] Images already preloaded, skipping extraction
	I1018 17:48:16.574590   69488 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 17:48:16.600203   69488 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 17:48:16.600227   69488 cache_images.go:85] Images are preloaded, skipping loading
	I1018 17:48:16.600237   69488 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1018 17:48:16.600342   69488 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-181800 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-181800 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 17:48:16.600422   69488 ssh_runner.go:195] Run: crio config
	I1018 17:48:16.665910   69488 cni.go:84] Creating CNI manager for ""
	I1018 17:48:16.665937   69488 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1018 17:48:16.665961   69488 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 17:48:16.665986   69488 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-181800 NodeName:ha-181800 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 17:48:16.666112   69488 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-181800"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 17:48:16.666132   69488 kube-vip.go:115] generating kube-vip config ...
	I1018 17:48:16.666191   69488 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1018 17:48:16.678158   69488 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1018 17:48:16.678333   69488 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1018 17:48:16.678419   69488 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 17:48:16.686215   69488 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 17:48:16.686327   69488 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1018 17:48:16.693873   69488 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1018 17:48:16.706512   69488 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 17:48:16.719311   69488 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1018 17:48:16.731738   69488 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1018 17:48:16.744107   69488 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1018 17:48:16.747479   69488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 17:48:16.756979   69488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 17:48:16.873983   69488 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 17:48:16.890078   69488 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800 for IP: 192.168.49.2
	I1018 17:48:16.890141   69488 certs.go:195] generating shared ca certs ...
	I1018 17:48:16.890170   69488 certs.go:227] acquiring lock for ca certs: {Name:mk544ed642fa2832e9f6dd22fa45f3270b7c1ad4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:48:16.890342   69488 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key
	I1018 17:48:16.890408   69488 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key
	I1018 17:48:16.890429   69488 certs.go:257] generating profile certs ...
	I1018 17:48:16.890571   69488 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/client.key
	I1018 17:48:16.890683   69488 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.key.46a58690
	I1018 17:48:16.890745   69488 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.key
	I1018 17:48:16.890767   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1018 17:48:16.890806   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1018 17:48:16.890839   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1018 17:48:16.890866   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1018 17:48:16.890905   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1018 17:48:16.890937   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1018 17:48:16.890965   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1018 17:48:16.891003   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1018 17:48:16.891075   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem (1338 bytes)
	W1018 17:48:16.891135   69488 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320_empty.pem, impossibly tiny 0 bytes
	I1018 17:48:16.891163   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 17:48:16.891206   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem (1078 bytes)
	I1018 17:48:16.891265   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem (1123 bytes)
	I1018 17:48:16.891308   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem (1675 bytes)
	I1018 17:48:16.891389   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem (1708 bytes)
	I1018 17:48:16.891447   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> /usr/share/ca-certificates/43202.pem
	I1018 17:48:16.891488   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:48:16.891521   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem -> /usr/share/ca-certificates/4320.pem
	I1018 17:48:16.892071   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 17:48:16.910107   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 17:48:16.927560   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 17:48:16.944252   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 17:48:16.961007   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1018 17:48:16.981715   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 17:48:17.002129   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 17:48:17.028151   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 17:48:17.050134   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /usr/share/ca-certificates/43202.pem (1708 bytes)
	I1018 17:48:17.076842   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 17:48:17.102342   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem --> /usr/share/ca-certificates/4320.pem (1338 bytes)
	I1018 17:48:17.120809   69488 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 17:48:17.135197   69488 ssh_runner.go:195] Run: openssl version
	I1018 17:48:17.141316   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43202.pem && ln -fs /usr/share/ca-certificates/43202.pem /etc/ssl/certs/43202.pem"
	I1018 17:48:17.149779   69488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43202.pem
	I1018 17:48:17.156384   69488 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 17:19 /usr/share/ca-certificates/43202.pem
	I1018 17:48:17.156498   69488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43202.pem
	I1018 17:48:17.198104   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43202.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 17:48:17.206025   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 17:48:17.214061   69488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:48:17.217558   69488 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:48:17.217636   69488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:48:17.259653   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 17:48:17.267330   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4320.pem && ln -fs /usr/share/ca-certificates/4320.pem /etc/ssl/certs/4320.pem"
	I1018 17:48:17.275410   69488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4320.pem
	I1018 17:48:17.278912   69488 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 17:19 /usr/share/ca-certificates/4320.pem
	I1018 17:48:17.279004   69488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4320.pem
	I1018 17:48:17.319663   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4320.pem /etc/ssl/certs/51391683.0"
	I1018 17:48:17.327893   69488 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 17:48:17.331787   69488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 17:48:17.372669   69488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 17:48:17.413640   69488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 17:48:17.455669   69488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 17:48:17.503310   69488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 17:48:17.553128   69488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1018 17:48:17.610923   69488 kubeadm.go:400] StartCluster: {Name:ha-181800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-181800 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 17:48:17.611069   69488 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 17:48:17.611141   69488 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 17:48:17.693793   69488 cri.go:89] found id: "42139c5070f82bb1e1dd7564661f58a74b134ab219b910335d022b2235e65fc0"
	I1018 17:48:17.693817   69488 cri.go:89] found id: "405d4b2711179ef2be985a5942049e2e36688b992d1fd9f96f2e882cfa95bfd5"
	I1018 17:48:17.693822   69488 cri.go:89] found id: "fb83e2f9880f48e77ccba9ff1a0240a5eacc8c5f0b7758c70e7c19289ba8795a"
	I1018 17:48:17.693826   69488 cri.go:89] found id: ""
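Before choosing between a fresh bootstrap and a restart, the log enumerates the existing kube-system containers through crictl with a pod-namespace label filter (the three IDs above). A small sketch of the same query via os/exec, assuming crictl is on the node's PATH and the CRI endpoint is configured in /etc/crictl.yaml as done later in this log:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same filter the log uses: all states, IDs only, kube-system pods only.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		panic(err)
	}
	ids := strings.Fields(string(out))
	fmt.Printf("found %d kube-system containers\n", len(ids))
	for _, id := range ids {
		fmt.Println(id)
	}
}
```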
	I1018 17:48:17.693886   69488 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 17:48:17.727781   69488 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T17:48:17Z" level=error msg="open /run/runc: no such file or directory"
	I1018 17:48:17.727885   69488 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 17:48:17.752985   69488 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 17:48:17.753011   69488 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 17:48:17.753077   69488 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 17:48:17.766549   69488 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 17:48:17.766998   69488 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-181800" does not appear in /home/jenkins/minikube-integration/21409-2509/kubeconfig
	I1018 17:48:17.767116   69488 kubeconfig.go:62] /home/jenkins/minikube-integration/21409-2509/kubeconfig needs updating (will repair): [kubeconfig missing "ha-181800" cluster setting kubeconfig missing "ha-181800" context setting]
	I1018 17:48:17.767408   69488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/kubeconfig: {Name:mk229d7d0b6575771899e0ce8a346bbddf5cf86b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
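The kubeconfig repair step above first checks whether the "ha-181800" cluster and context entries exist before rewriting the file. A minimal client-go sketch of that existence check, using the path and profile name from the log (the repair and file-locking logic is omitted):

```go
package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as reported in the log.
	path := "/home/jenkins/minikube-integration/21409-2509/kubeconfig"
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		panic(err)
	}
	name := "ha-181800"
	_, hasCluster := cfg.Clusters[name]  // cluster entry (server URL, CA)
	_, hasContext := cfg.Contexts[name]  // context entry (cluster + user binding)
	fmt.Printf("cluster present: %v, context present: %v\n", hasCluster, hasContext)
}
```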
	I1018 17:48:17.768000   69488 kapi.go:59] client config for ha-181800: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/client.key", CAFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1018 17:48:17.768691   69488 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1018 17:48:17.768713   69488 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1018 17:48:17.768754   69488 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1018 17:48:17.768718   69488 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1018 17:48:17.768800   69488 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1018 17:48:17.768817   69488 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1018 17:48:17.769158   69488 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 17:48:17.777893   69488 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1018 17:48:17.777928   69488 kubeadm.go:601] duration metric: took 24.910349ms to restartPrimaryControlPlane
	I1018 17:48:17.777937   69488 kubeadm.go:402] duration metric: took 167.022952ms to StartCluster
	I1018 17:48:17.777952   69488 settings.go:142] acquiring lock: {Name:mk3a3fd093bc95e20cc1842611fedcbe4a79e692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:48:17.778019   69488 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-2509/kubeconfig
	I1018 17:48:17.778655   69488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/kubeconfig: {Name:mk229d7d0b6575771899e0ce8a346bbddf5cf86b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:48:17.778876   69488 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 17:48:17.778908   69488 start.go:241] waiting for startup goroutines ...
	I1018 17:48:17.778916   69488 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 17:48:17.779460   69488 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:48:17.784791   69488 out.go:179] * Enabled addons: 
	I1018 17:48:17.787780   69488 addons.go:514] duration metric: took 8.843165ms for enable addons: enabled=[]
	I1018 17:48:17.787841   69488 start.go:246] waiting for cluster config update ...
	I1018 17:48:17.787851   69488 start.go:255] writing updated cluster config ...
	I1018 17:48:17.791154   69488 out.go:203] 
	I1018 17:48:17.794423   69488 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:48:17.794545   69488 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/config.json ...
	I1018 17:48:17.797951   69488 out.go:179] * Starting "ha-181800-m02" control-plane node in "ha-181800" cluster
	I1018 17:48:17.800906   69488 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 17:48:17.803852   69488 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 17:48:17.806813   69488 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 17:48:17.806848   69488 cache.go:58] Caching tarball of preloaded images
	I1018 17:48:17.806951   69488 preload.go:233] Found /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 17:48:17.806966   69488 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 17:48:17.807089   69488 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/config.json ...
	I1018 17:48:17.807301   69488 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 17:48:17.833480   69488 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 17:48:17.833505   69488 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 17:48:17.833520   69488 cache.go:232] Successfully downloaded all kic artifacts
	I1018 17:48:17.833542   69488 start.go:360] acquireMachinesLock for ha-181800-m02: {Name:mk36a488c0fbfc8557c6ba291b969aad85b45635 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 17:48:17.833604   69488 start.go:364] duration metric: took 42.142µs to acquireMachinesLock for "ha-181800-m02"
	I1018 17:48:17.833629   69488 start.go:96] Skipping create...Using existing machine configuration
	I1018 17:48:17.833638   69488 fix.go:54] fixHost starting: m02
	I1018 17:48:17.833888   69488 cli_runner.go:164] Run: docker container inspect ha-181800-m02 --format={{.State.Status}}
	I1018 17:48:17.853969   69488 fix.go:112] recreateIfNeeded on ha-181800-m02: state=Stopped err=<nil>
	W1018 17:48:17.853999   69488 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 17:48:17.859511   69488 out.go:252] * Restarting existing docker container for "ha-181800-m02" ...
	I1018 17:48:17.859599   69488 cli_runner.go:164] Run: docker start ha-181800-m02
	I1018 17:48:18.199583   69488 cli_runner.go:164] Run: docker container inspect ha-181800-m02 --format={{.State.Status}}
	I1018 17:48:18.226549   69488 kic.go:430] container "ha-181800-m02" state is running.
	I1018 17:48:18.226893   69488 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-181800-m02
	I1018 17:48:18.262995   69488 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/config.json ...
	I1018 17:48:18.263226   69488 machine.go:93] provisionDockerMachine start ...
	I1018 17:48:18.263282   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m02
	I1018 17:48:18.293143   69488 main.go:141] libmachine: Using SSH client type: native
	I1018 17:48:18.293452   69488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I1018 17:48:18.293466   69488 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 17:48:18.294119   69488 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1018 17:48:21.560416   69488 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-181800-m02
	
	I1018 17:48:21.560480   69488 ubuntu.go:182] provisioning hostname "ha-181800-m02"
	I1018 17:48:21.560583   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m02
	I1018 17:48:21.588400   69488 main.go:141] libmachine: Using SSH client type: native
	I1018 17:48:21.588705   69488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I1018 17:48:21.588717   69488 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-181800-m02 && echo "ha-181800-m02" | sudo tee /etc/hostname
	I1018 17:48:21.918738   69488 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-181800-m02
	
	I1018 17:48:21.918888   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m02
	I1018 17:48:21.950544   69488 main.go:141] libmachine: Using SSH client type: native
	I1018 17:48:21.950842   69488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I1018 17:48:21.950857   69488 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-181800-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-181800-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-181800-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 17:48:22.217685   69488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
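Provisioning of m02 happens entirely over SSH to the container's published port-22 mapping (127.0.0.1:32823 here, "native" Go SSH client): set the hostname, patch /etc/hosts, then configure the runtime. A minimal sketch of running one such remote command with golang.org/x/crypto/ssh, assuming key-based auth with the machine's id_rsa as shown in the log:

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path as reported by sshutil in the log.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m02/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test container; not for real hosts
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:32823", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	// Same first command the provisioner runs: read back the hostname.
	out, err := session.CombinedOutput("hostname")
	fmt.Printf("out=%q err=%v\n", out, err)
}
```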
	I1018 17:48:22.217712   69488 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-2509/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-2509/.minikube}
	I1018 17:48:22.217727   69488 ubuntu.go:190] setting up certificates
	I1018 17:48:22.217741   69488 provision.go:84] configureAuth start
	I1018 17:48:22.217804   69488 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-181800-m02
	I1018 17:48:22.255770   69488 provision.go:143] copyHostCerts
	I1018 17:48:22.255810   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem
	I1018 17:48:22.255843   69488 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem, removing ...
	I1018 17:48:22.255850   69488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem
	I1018 17:48:22.255928   69488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem (1078 bytes)
	I1018 17:48:22.255999   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem
	I1018 17:48:22.256017   69488 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem, removing ...
	I1018 17:48:22.256021   69488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem
	I1018 17:48:22.256045   69488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem (1123 bytes)
	I1018 17:48:22.256080   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem
	I1018 17:48:22.256096   69488 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem, removing ...
	I1018 17:48:22.256100   69488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem
	I1018 17:48:22.256121   69488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem (1675 bytes)
	I1018 17:48:22.256204   69488 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem org=jenkins.ha-181800-m02 san=[127.0.0.1 192.168.49.3 ha-181800-m02 localhost minikube]
	I1018 17:48:22.398509   69488 provision.go:177] copyRemoteCerts
	I1018 17:48:22.398627   69488 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 17:48:22.398703   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m02
	I1018 17:48:22.417071   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m02/id_rsa Username:docker}
	I1018 17:48:22.539435   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1018 17:48:22.539497   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 17:48:22.590740   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1018 17:48:22.590799   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 17:48:22.640636   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1018 17:48:22.640749   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1018 17:48:22.682470   69488 provision.go:87] duration metric: took 464.715425ms to configureAuth
	I1018 17:48:22.682541   69488 ubuntu.go:206] setting minikube options for container-runtime
	I1018 17:48:22.682832   69488 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:48:22.682993   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m02
	I1018 17:48:22.710684   69488 main.go:141] libmachine: Using SSH client type: native
	I1018 17:48:22.710986   69488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I1018 17:48:22.711001   69488 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 17:49:53.355970   69488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 17:49:53.355994   69488 machine.go:96] duration metric: took 1m35.092758423s to provisionDockerMachine
	I1018 17:49:53.356005   69488 start.go:293] postStartSetup for "ha-181800-m02" (driver="docker")
	I1018 17:49:53.356016   69488 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 17:49:53.356073   69488 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 17:49:53.356118   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m02
	I1018 17:49:53.374240   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m02/id_rsa Username:docker}
	I1018 17:49:53.476619   69488 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 17:49:53.479822   69488 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 17:49:53.479849   69488 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 17:49:53.479860   69488 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/addons for local assets ...
	I1018 17:49:53.479932   69488 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/files for local assets ...
	I1018 17:49:53.480042   69488 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> 43202.pem in /etc/ssl/certs
	I1018 17:49:53.480053   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> /etc/ssl/certs/43202.pem
	I1018 17:49:53.480150   69488 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 17:49:53.487506   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /etc/ssl/certs/43202.pem (1708 bytes)
	I1018 17:49:53.503781   69488 start.go:296] duration metric: took 147.726679ms for postStartSetup
	I1018 17:49:53.503861   69488 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 17:49:53.503907   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m02
	I1018 17:49:53.521965   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m02/id_rsa Username:docker}
	I1018 17:49:53.622051   69488 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 17:49:53.627407   69488 fix.go:56] duration metric: took 1m35.793761422s for fixHost
	I1018 17:49:53.627431   69488 start.go:83] releasing machines lock for "ha-181800-m02", held for 1m35.793813517s
	I1018 17:49:53.627503   69488 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-181800-m02
	I1018 17:49:53.647527   69488 out.go:179] * Found network options:
	I1018 17:49:53.650482   69488 out.go:179]   - NO_PROXY=192.168.49.2
	W1018 17:49:53.653336   69488 proxy.go:120] fail to check proxy env: Error ip not in block
	W1018 17:49:53.653390   69488 proxy.go:120] fail to check proxy env: Error ip not in block
	I1018 17:49:53.653464   69488 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 17:49:53.653510   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m02
	I1018 17:49:53.653793   69488 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 17:49:53.653863   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m02
	I1018 17:49:53.671905   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m02/id_rsa Username:docker}
	I1018 17:49:53.683540   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m02/id_rsa Username:docker}
	I1018 17:49:53.861179   69488 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 17:49:53.865770   69488 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 17:49:53.865856   69488 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 17:49:53.873670   69488 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 17:49:53.873694   69488 start.go:495] detecting cgroup driver to use...
	I1018 17:49:53.873745   69488 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 17:49:53.873813   69488 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 17:49:53.888526   69488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 17:49:53.901761   69488 docker.go:218] disabling cri-docker service (if available) ...
	I1018 17:49:53.901850   69488 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 17:49:53.917699   69488 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 17:49:53.931789   69488 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 17:49:54.071500   69488 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 17:49:54.203057   69488 docker.go:234] disabling docker service ...
	I1018 17:49:54.203122   69488 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 17:49:54.218563   69488 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 17:49:54.232433   69488 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 17:49:54.361440   69488 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 17:49:54.490330   69488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 17:49:54.503221   69488 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 17:49:54.517805   69488 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 17:49:54.517883   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:49:54.527169   69488 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 17:49:54.527231   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:49:54.536041   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:49:54.544703   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:49:54.553243   69488 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 17:49:54.562614   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:49:54.571510   69488 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:49:54.579788   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:49:54.588456   69488 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 17:49:54.595820   69488 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 17:49:54.602817   69488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 17:49:54.728528   69488 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 17:49:58.621131   69488 ssh_runner.go:235] Completed: sudo systemctl restart crio: (3.89256859s)
	I1018 17:49:58.626115   69488 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 17:49:58.626223   69488 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 17:49:58.631167   69488 start.go:563] Will wait 60s for crictl version
	I1018 17:49:58.631232   69488 ssh_runner.go:195] Run: which crictl
	I1018 17:49:58.639191   69488 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 17:49:58.672795   69488 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 17:49:58.672878   69488 ssh_runner.go:195] Run: crio --version
	I1018 17:49:58.723386   69488 ssh_runner.go:195] Run: crio --version
	I1018 17:49:58.777499   69488 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 17:49:58.780571   69488 out.go:179]   - env NO_PROXY=192.168.49.2
	I1018 17:49:58.783632   69488 cli_runner.go:164] Run: docker network inspect ha-181800 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 17:49:58.815077   69488 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1018 17:49:58.819329   69488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 17:49:58.831215   69488 mustload.go:65] Loading cluster: ha-181800
	I1018 17:49:58.831449   69488 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:49:58.831716   69488 cli_runner.go:164] Run: docker container inspect ha-181800 --format={{.State.Status}}
	I1018 17:49:58.862708   69488 host.go:66] Checking if "ha-181800" exists ...
	I1018 17:49:58.863022   69488 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800 for IP: 192.168.49.3
	I1018 17:49:58.863040   69488 certs.go:195] generating shared ca certs ...
	I1018 17:49:58.863058   69488 certs.go:227] acquiring lock for ca certs: {Name:mk544ed642fa2832e9f6dd22fa45f3270b7c1ad4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:49:58.863172   69488 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key
	I1018 17:49:58.863215   69488 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key
	I1018 17:49:58.863222   69488 certs.go:257] generating profile certs ...
	I1018 17:49:58.863290   69488 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/client.key
	I1018 17:49:58.863337   69488 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.key.887e0b27
	I1018 17:49:58.863381   69488 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.key
	I1018 17:49:58.863390   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1018 17:49:58.863402   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1018 17:49:58.863414   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1018 17:49:58.863425   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1018 17:49:58.863435   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1018 17:49:58.863448   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1018 17:49:58.863470   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1018 17:49:58.863481   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1018 17:49:58.863531   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem (1338 bytes)
	W1018 17:49:58.863559   69488 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320_empty.pem, impossibly tiny 0 bytes
	I1018 17:49:58.863567   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 17:49:58.863589   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem (1078 bytes)
	I1018 17:49:58.863615   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem (1123 bytes)
	I1018 17:49:58.863635   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem (1675 bytes)
	I1018 17:49:58.863676   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem (1708 bytes)
	I1018 17:49:58.863709   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> /usr/share/ca-certificates/43202.pem
	I1018 17:49:58.863731   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:49:58.863743   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem -> /usr/share/ca-certificates/4320.pem
	I1018 17:49:58.863871   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:49:58.882935   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800/id_rsa Username:docker}
	I1018 17:49:58.981280   69488 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1018 17:49:58.984884   69488 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1018 17:49:58.992968   69488 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1018 17:49:58.996547   69488 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1018 17:49:59.005742   69488 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1018 17:49:59.009863   69488 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1018 17:49:59.018651   69488 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1018 17:49:59.022300   69488 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1018 17:49:59.030647   69488 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1018 17:49:59.034128   69488 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1018 17:49:59.042303   69488 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1018 17:49:59.045696   69488 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1018 17:49:59.054134   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 17:49:59.072336   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 17:49:59.090250   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 17:49:59.107793   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 17:49:59.124795   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1018 17:49:59.150615   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 17:49:59.169033   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 17:49:59.186177   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 17:49:59.203120   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /usr/share/ca-certificates/43202.pem (1708 bytes)
	I1018 17:49:59.220145   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 17:49:59.237999   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem --> /usr/share/ca-certificates/4320.pem (1338 bytes)
	I1018 17:49:59.257279   69488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1018 17:49:59.269634   69488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1018 17:49:59.282735   69488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1018 17:49:59.295341   69488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1018 17:49:59.308329   69488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1018 17:49:59.320556   69488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1018 17:49:59.332714   69488 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1018 17:49:59.348902   69488 ssh_runner.go:195] Run: openssl version
	I1018 17:49:59.356738   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 17:49:59.365172   69488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:49:59.368839   69488 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:49:59.368976   69488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:49:59.414784   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 17:49:59.422423   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4320.pem && ln -fs /usr/share/ca-certificates/4320.pem /etc/ssl/certs/4320.pem"
	I1018 17:49:59.430191   69488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4320.pem
	I1018 17:49:59.433619   69488 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 17:19 /usr/share/ca-certificates/4320.pem
	I1018 17:49:59.433727   69488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4320.pem
	I1018 17:49:59.474255   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4320.pem /etc/ssl/certs/51391683.0"
	I1018 17:49:59.481911   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43202.pem && ln -fs /usr/share/ca-certificates/43202.pem /etc/ssl/certs/43202.pem"
	I1018 17:49:59.490061   69488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43202.pem
	I1018 17:49:59.493763   69488 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 17:19 /usr/share/ca-certificates/43202.pem
	I1018 17:49:59.493835   69488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43202.pem
	I1018 17:49:59.534567   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43202.pem /etc/ssl/certs/3ec20f2e.0"
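The block above installs each CA into the node's trust store the way OpenSSL expects: copy the PEM into /usr/share/ca-certificates, then symlink it under /etc/ssl/certs as <subject-hash>.0, where the hash comes from `openssl x509 -hash -noout`. A small sketch of that hash-and-link step (a hypothetical helper that shells out to openssl for the hash, not minikube's own code):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash computes the OpenSSL subject hash of a CA certificate and
// symlinks it as <certsDir>/<hash>.0 so TLS clients can locate it by subject.
func linkByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // replace an existing link, if any
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```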
	I1018 17:49:59.542475   69488 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 17:49:59.546230   69488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 17:49:59.592499   69488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 17:49:59.635764   69488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 17:49:59.676750   69488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 17:49:59.719668   69488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 17:49:59.760653   69488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1018 17:49:59.801453   69488 kubeadm.go:934] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1018 17:49:59.801594   69488 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-181800-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-181800 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 17:49:59.801625   69488 kube-vip.go:115] generating kube-vip config ...
	I1018 17:49:59.801676   69488 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1018 17:49:59.813138   69488 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1018 17:49:59.813221   69488 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
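The `lsmod | grep ip_vs` probe a few lines earlier came back empty, so the generated kube-vip static-pod manifest above gives up on IPVS-based control-plane load balancing and relies on ARP-based VIP failover only. A minimal sketch of an equivalent module probe that reads /proc/modules directly (a hypothetical helper, not minikube's code):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// hasModule reports whether a kernel module (e.g. "ip_vs") is currently
// loaded by scanning /proc/modules — the same data `lsmod` prints.
func hasModule(name string) (bool, error) {
	f, err := os.Open("/proc/modules")
	if err != nil {
		return false, err
	}
	defer f.Close()
	s := bufio.NewScanner(f)
	for s.Scan() {
		if strings.HasPrefix(s.Text(), name+" ") {
			return true, nil
		}
	}
	return false, s.Err()
}

func main() {
	ok, err := hasModule("ip_vs")
	fmt.Println("ip_vs loaded:", ok, "err:", err)
}
```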
	I1018 17:49:59.813313   69488 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 17:49:59.820930   69488 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 17:49:59.821061   69488 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1018 17:49:59.828485   69488 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1018 17:49:59.840643   69488 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 17:49:59.853675   69488 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1018 17:49:59.867836   69488 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1018 17:49:59.871456   69488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 17:49:59.881052   69488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 17:50:00.019627   69488 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 17:50:00.063785   69488 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 17:50:00.065404   69488 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:50:00.068131   69488 out.go:179] * Verifying Kubernetes components...
	I1018 17:50:00.071263   69488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 17:50:00.372789   69488 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 17:50:00.393030   69488 kapi.go:59] client config for ha-181800: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/client.key", CAFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1018 17:50:00.393170   69488 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1018 17:50:00.393487   69488 node_ready.go:35] waiting up to 6m0s for node "ha-181800-m02" to be "Ready" ...
	W1018 17:50:02.394400   69488 node_ready.go:55] error getting node "ha-181800-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-181800-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1018 17:50:08.470080   69488 node_ready.go:57] node "ha-181800-m02" has "Ready":"Unknown" status (will retry)
	I1018 17:50:09.421305   69488 node_ready.go:49] node "ha-181800-m02" is "Ready"
	I1018 17:50:09.421384   69488 node_ready.go:38] duration metric: took 9.02787205s for node "ha-181800-m02" to be "Ready" ...
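The Ready wait above retries through connection-refused errors and an "Unknown" status until the node reports Ready (about 9s here, with a 6m budget). A minimal client-go sketch of the same condition poll, assuming the kubeconfig path shown earlier in the log:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21409-2509/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-181800-m02", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		// Retry both on API errors and on not-yet-Ready, as the log does.
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for Ready")
}
```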
	I1018 17:50:09.421422   69488 api_server.go:52] waiting for apiserver process to appear ...
	I1018 17:50:09.421500   69488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:50:09.447456   69488 api_server.go:72] duration metric: took 9.383624261s to wait for apiserver process to appear ...
	I1018 17:50:09.447520   69488 api_server.go:88] waiting for apiserver healthz status ...
	I1018 17:50:09.447553   69488 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 17:50:09.466347   69488 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 17:50:09.466422   69488 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 17:50:09.947999   69488 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 17:50:09.958418   69488 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 17:50:09.958509   69488 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 17:50:10.447814   69488 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 17:50:10.462608   69488 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1018 17:50:10.463984   69488 api_server.go:141] control plane version: v1.34.1
	I1018 17:50:10.464041   69488 api_server.go:131] duration metric: took 1.016500993s to wait for apiserver health ...
	I1018 17:50:10.464067   69488 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 17:50:10.483197   69488 system_pods.go:59] 26 kube-system pods found
	I1018 17:50:10.483289   69488 system_pods.go:61] "coredns-66bc5c9577-f6v2w" [a1fbdf00-9636-43a5-b1ed-a98bcacb5537] Running
	I1018 17:50:10.483312   69488 system_pods.go:61] "coredns-66bc5c9577-p7nbg" [9d361193-5b45-400e-8161-804fc30e7515] Running
	I1018 17:50:10.483343   69488 system_pods.go:61] "etcd-ha-181800" [3aafeb42-d09a-4b84-9739-e25adc3a4135] Running
	I1018 17:50:10.483363   69488 system_pods.go:61] "etcd-ha-181800-m02" [194d8d52-b9b6-43ae-8c1f-01b965d3ae96] Running
	I1018 17:50:10.483380   69488 system_pods.go:61] "etcd-ha-181800-m03" [f52cd0ee-6f99-49ba-8c4f-218b8d166fe2] Running
	I1018 17:50:10.483399   69488 system_pods.go:61] "kindnet-72mvm" [5edfc356-9d49-4895-b36a-06c2bd39155a] Running
	I1018 17:50:10.483417   69488 system_pods.go:61] "kindnet-86s8z" [6559ac9e-c73d-4d49-a0e1-87d630e5bec8] Running
	I1018 17:50:10.483439   69488 system_pods.go:61] "kindnet-88bv7" [3b3b9715-1e6e-4046-adae-f372381e068a] Running
	I1018 17:50:10.483466   69488 system_pods.go:61] "kindnet-9qbbw" [d1a305ed-4a0e-4ccc-90e0-04577ad4e5c4] Running
	I1018 17:50:10.483486   69488 system_pods.go:61] "kube-apiserver-ha-181800" [4966738e-d055-404d-82ad-0d3f23ef0337] Running
	I1018 17:50:10.483506   69488 system_pods.go:61] "kube-apiserver-ha-181800-m02" [344fc499-0c04-4f86-a919-3c2da1e7a1e6] Running
	I1018 17:50:10.483524   69488 system_pods.go:61] "kube-apiserver-ha-181800-m03" [ce72f944-adc2-46a9-a83c-dc75936c3e9c] Running
	I1018 17:50:10.483543   69488 system_pods.go:61] "kube-controller-manager-ha-181800" [9a4be61b-4ecc-46da-86a1-472b6da720b9] Running
	I1018 17:50:10.483573   69488 system_pods.go:61] "kube-controller-manager-ha-181800-m02" [6a519ce2-92dc-4003-8f1a-6d818fea6da3] Running
	I1018 17:50:10.483593   69488 system_pods.go:61] "kube-controller-manager-ha-181800-m03" [9d247c9d-37a0-4880-8b0a-1134ebb963ab] Running
	I1018 17:50:10.483612   69488 system_pods.go:61] "kube-proxy-dpwpn" [dfabd129-fc36-4d16-ab0f-0b9ecc015712] Running
	I1018 17:50:10.483630   69488 system_pods.go:61] "kube-proxy-fj4ww" [40c5681f-ad11-4e21-a852-5601e2a9fa6e] Running
	I1018 17:50:10.483648   69488 system_pods.go:61] "kube-proxy-qsqmb" [9e100b31-50e5-4d86-a234-0d6277009e98] Running
	I1018 17:50:10.483673   69488 system_pods.go:61] "kube-proxy-stgvm" [15b89226-91ae-478f-acfe-7841776b1377] Running
	I1018 17:50:10.483697   69488 system_pods.go:61] "kube-scheduler-ha-181800" [f4699386-754c-4fa2-8556-174d872d6825] Running
	I1018 17:50:10.483716   69488 system_pods.go:61] "kube-scheduler-ha-181800-m02" [565d55c5-9541-4ef9-a036-3d9ff03f0fa9] Running
	I1018 17:50:10.483733   69488 system_pods.go:61] "kube-scheduler-ha-181800-m03" [4f8687e4-3dbc-4c98-97a4-ab703b016798] Running
	I1018 17:50:10.483751   69488 system_pods.go:61] "kube-vip-ha-181800" [a947f5a9-6257-4ff0-9f73-2d720974668b] Running
	I1018 17:50:10.483784   69488 system_pods.go:61] "kube-vip-ha-181800-m02" [21258022-efed-42fb-b206-89ffcd8d3820] Running
	I1018 17:50:10.483812   69488 system_pods.go:61] "kube-vip-ha-181800-m03" [0087f776-5d07-4c43-906d-c63afc2cc349] Running
	I1018 17:50:10.483830   69488 system_pods.go:61] "storage-provisioner" [3c6521cd-8e1b-46aa-96a3-39e475e1426c] Running
	I1018 17:50:10.483848   69488 system_pods.go:74] duration metric: took 19.763103ms to wait for pod list to return data ...
	I1018 17:50:10.483877   69488 default_sa.go:34] waiting for default service account to be created ...
	I1018 17:50:10.493513   69488 default_sa.go:45] found service account: "default"
	I1018 17:50:10.493594   69488 default_sa.go:55] duration metric: took 9.697323ms for default service account to be created ...
	I1018 17:50:10.493625   69488 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 17:50:10.501353   69488 system_pods.go:86] 26 kube-system pods found
	I1018 17:50:10.501452   69488 system_pods.go:89] "coredns-66bc5c9577-f6v2w" [a1fbdf00-9636-43a5-b1ed-a98bcacb5537] Running
	I1018 17:50:10.501476   69488 system_pods.go:89] "coredns-66bc5c9577-p7nbg" [9d361193-5b45-400e-8161-804fc30e7515] Running
	I1018 17:50:10.501494   69488 system_pods.go:89] "etcd-ha-181800" [3aafeb42-d09a-4b84-9739-e25adc3a4135] Running
	I1018 17:50:10.501514   69488 system_pods.go:89] "etcd-ha-181800-m02" [194d8d52-b9b6-43ae-8c1f-01b965d3ae96] Running
	I1018 17:50:10.501540   69488 system_pods.go:89] "etcd-ha-181800-m03" [f52cd0ee-6f99-49ba-8c4f-218b8d166fe2] Running
	I1018 17:50:10.501560   69488 system_pods.go:89] "kindnet-72mvm" [5edfc356-9d49-4895-b36a-06c2bd39155a] Running
	I1018 17:50:10.501578   69488 system_pods.go:89] "kindnet-86s8z" [6559ac9e-c73d-4d49-a0e1-87d630e5bec8] Running
	I1018 17:50:10.501595   69488 system_pods.go:89] "kindnet-88bv7" [3b3b9715-1e6e-4046-adae-f372381e068a] Running
	I1018 17:50:10.501612   69488 system_pods.go:89] "kindnet-9qbbw" [d1a305ed-4a0e-4ccc-90e0-04577ad4e5c4] Running
	I1018 17:50:10.501639   69488 system_pods.go:89] "kube-apiserver-ha-181800" [4966738e-d055-404d-82ad-0d3f23ef0337] Running
	I1018 17:50:10.501660   69488 system_pods.go:89] "kube-apiserver-ha-181800-m02" [344fc499-0c04-4f86-a919-3c2da1e7a1e6] Running
	I1018 17:50:10.501677   69488 system_pods.go:89] "kube-apiserver-ha-181800-m03" [ce72f944-adc2-46a9-a83c-dc75936c3e9c] Running
	I1018 17:50:10.501694   69488 system_pods.go:89] "kube-controller-manager-ha-181800" [9a4be61b-4ecc-46da-86a1-472b6da720b9] Running
	I1018 17:50:10.501711   69488 system_pods.go:89] "kube-controller-manager-ha-181800-m02" [6a519ce2-92dc-4003-8f1a-6d818fea6da3] Running
	I1018 17:50:10.501737   69488 system_pods.go:89] "kube-controller-manager-ha-181800-m03" [9d247c9d-37a0-4880-8b0a-1134ebb963ab] Running
	I1018 17:50:10.501756   69488 system_pods.go:89] "kube-proxy-dpwpn" [dfabd129-fc36-4d16-ab0f-0b9ecc015712] Running
	I1018 17:50:10.501776   69488 system_pods.go:89] "kube-proxy-fj4ww" [40c5681f-ad11-4e21-a852-5601e2a9fa6e] Running
	I1018 17:50:10.501793   69488 system_pods.go:89] "kube-proxy-qsqmb" [9e100b31-50e5-4d86-a234-0d6277009e98] Running
	I1018 17:50:10.501809   69488 system_pods.go:89] "kube-proxy-stgvm" [15b89226-91ae-478f-acfe-7841776b1377] Running
	I1018 17:50:10.501836   69488 system_pods.go:89] "kube-scheduler-ha-181800" [f4699386-754c-4fa2-8556-174d872d6825] Running
	I1018 17:50:10.501855   69488 system_pods.go:89] "kube-scheduler-ha-181800-m02" [565d55c5-9541-4ef9-a036-3d9ff03f0fa9] Running
	I1018 17:50:10.501872   69488 system_pods.go:89] "kube-scheduler-ha-181800-m03" [4f8687e4-3dbc-4c98-97a4-ab703b016798] Running
	I1018 17:50:10.501889   69488 system_pods.go:89] "kube-vip-ha-181800" [a947f5a9-6257-4ff0-9f73-2d720974668b] Running
	I1018 17:50:10.501906   69488 system_pods.go:89] "kube-vip-ha-181800-m02" [21258022-efed-42fb-b206-89ffcd8d3820] Running
	I1018 17:50:10.501923   69488 system_pods.go:89] "kube-vip-ha-181800-m03" [0087f776-5d07-4c43-906d-c63afc2cc349] Running
	I1018 17:50:10.501939   69488 system_pods.go:89] "storage-provisioner" [3c6521cd-8e1b-46aa-96a3-39e475e1426c] Running
	I1018 17:50:10.501958   69488 system_pods.go:126] duration metric: took 8.313403ms to wait for k8s-apps to be running ...
	I1018 17:50:10.501982   69488 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 17:50:10.502072   69488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 17:50:10.521995   69488 system_svc.go:56] duration metric: took 20.005468ms WaitForService to wait for kubelet
	I1018 17:50:10.522064   69488 kubeadm.go:586] duration metric: took 10.458238282s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 17:50:10.522097   69488 node_conditions.go:102] verifying NodePressure condition ...
	I1018 17:50:10.529801   69488 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 17:50:10.529839   69488 node_conditions.go:123] node cpu capacity is 2
	I1018 17:50:10.529851   69488 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 17:50:10.529856   69488 node_conditions.go:123] node cpu capacity is 2
	I1018 17:50:10.529860   69488 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 17:50:10.529864   69488 node_conditions.go:123] node cpu capacity is 2
	I1018 17:50:10.529868   69488 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 17:50:10.529873   69488 node_conditions.go:123] node cpu capacity is 2
	I1018 17:50:10.529878   69488 node_conditions.go:105] duration metric: took 7.761413ms to run NodePressure ...
	I1018 17:50:10.529893   69488 start.go:241] waiting for startup goroutines ...
	I1018 17:50:10.529919   69488 start.go:255] writing updated cluster config ...
	I1018 17:50:10.533578   69488 out.go:203] 
	I1018 17:50:10.536806   69488 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:50:10.536948   69488 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/config.json ...
	I1018 17:50:10.540446   69488 out.go:179] * Starting "ha-181800-m03" control-plane node in "ha-181800" cluster
	I1018 17:50:10.544213   69488 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 17:50:10.547247   69488 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 17:50:10.550234   69488 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 17:50:10.550276   69488 cache.go:58] Caching tarball of preloaded images
	I1018 17:50:10.550383   69488 preload.go:233] Found /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 17:50:10.550399   69488 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 17:50:10.550572   69488 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/config.json ...
	I1018 17:50:10.550792   69488 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 17:50:10.581920   69488 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 17:50:10.581944   69488 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 17:50:10.581957   69488 cache.go:232] Successfully downloaded all kic artifacts
	I1018 17:50:10.581981   69488 start.go:360] acquireMachinesLock for ha-181800-m03: {Name:mk3bd15228a4ef4b7c016e23b190ad29deb5e3c7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 17:50:10.582039   69488 start.go:364] duration metric: took 38.023µs to acquireMachinesLock for "ha-181800-m03"
	I1018 17:50:10.582062   69488 start.go:96] Skipping create...Using existing machine configuration
	I1018 17:50:10.582068   69488 fix.go:54] fixHost starting: m03
	I1018 17:50:10.582331   69488 cli_runner.go:164] Run: docker container inspect ha-181800-m03 --format={{.State.Status}}
	I1018 17:50:10.604865   69488 fix.go:112] recreateIfNeeded on ha-181800-m03: state=Stopped err=<nil>
	W1018 17:50:10.604890   69488 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 17:50:10.607957   69488 out.go:252] * Restarting existing docker container for "ha-181800-m03" ...
	I1018 17:50:10.608050   69488 cli_runner.go:164] Run: docker start ha-181800-m03
	I1018 17:50:10.899418   69488 cli_runner.go:164] Run: docker container inspect ha-181800-m03 --format={{.State.Status}}
	I1018 17:50:10.926262   69488 kic.go:430] container "ha-181800-m03" state is running.
	I1018 17:50:10.926628   69488 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-181800-m03
	I1018 17:50:10.950821   69488 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/config.json ...
	I1018 17:50:10.951066   69488 machine.go:93] provisionDockerMachine start ...
	I1018 17:50:10.951120   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m03
	I1018 17:50:10.976987   69488 main.go:141] libmachine: Using SSH client type: native
	I1018 17:50:10.977281   69488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1018 17:50:10.977290   69488 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 17:50:10.978264   69488 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1018 17:50:14.380761   69488 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-181800-m03
	
	I1018 17:50:14.380788   69488 ubuntu.go:182] provisioning hostname "ha-181800-m03"
	I1018 17:50:14.380865   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m03
	I1018 17:50:14.409115   69488 main.go:141] libmachine: Using SSH client type: native
	I1018 17:50:14.409426   69488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1018 17:50:14.409441   69488 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-181800-m03 && echo "ha-181800-m03" | sudo tee /etc/hostname
	I1018 17:50:14.717264   69488 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-181800-m03
	
	I1018 17:50:14.717353   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m03
	I1018 17:50:14.739028   69488 main.go:141] libmachine: Using SSH client type: native
	I1018 17:50:14.739335   69488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1018 17:50:14.739352   69488 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-181800-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-181800-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-181800-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 17:50:14.965850   69488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 17:50:14.965903   69488 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-2509/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-2509/.minikube}
	I1018 17:50:14.965931   69488 ubuntu.go:190] setting up certificates
	I1018 17:50:14.965940   69488 provision.go:84] configureAuth start
	I1018 17:50:14.966014   69488 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-181800-m03
	I1018 17:50:15.001400   69488 provision.go:143] copyHostCerts
	I1018 17:50:15.001447   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem
	I1018 17:50:15.001479   69488 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem, removing ...
	I1018 17:50:15.001492   69488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem
	I1018 17:50:15.001591   69488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem (1078 bytes)
	I1018 17:50:15.001685   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem
	I1018 17:50:15.001709   69488 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem, removing ...
	I1018 17:50:15.001717   69488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem
	I1018 17:50:15.001745   69488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem (1123 bytes)
	I1018 17:50:15.001793   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem
	I1018 17:50:15.001814   69488 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem, removing ...
	I1018 17:50:15.001822   69488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem
	I1018 17:50:15.001846   69488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem (1675 bytes)
	I1018 17:50:15.001898   69488 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem org=jenkins.ha-181800-m03 san=[127.0.0.1 192.168.49.4 ha-181800-m03 localhost minikube]
	I1018 17:50:15.478787   69488 provision.go:177] copyRemoteCerts
	I1018 17:50:15.478855   69488 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 17:50:15.478897   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m03
	I1018 17:50:15.499352   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m03/id_rsa Username:docker}
	I1018 17:50:15.670546   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1018 17:50:15.670610   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 17:50:15.737652   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1018 17:50:15.737722   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1018 17:50:15.785672   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1018 17:50:15.785736   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 17:50:15.819920   69488 provision.go:87] duration metric: took 853.956632ms to configureAuth
	I1018 17:50:15.819958   69488 ubuntu.go:206] setting minikube options for container-runtime
	I1018 17:50:15.820214   69488 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:50:15.820332   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m03
	I1018 17:50:15.865677   69488 main.go:141] libmachine: Using SSH client type: native
	I1018 17:50:15.866025   69488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1018 17:50:15.866041   69488 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 17:50:16.412687   69488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 17:50:16.412751   69488 machine.go:96] duration metric: took 5.461676033s to provisionDockerMachine
	I1018 17:50:16.412774   69488 start.go:293] postStartSetup for "ha-181800-m03" (driver="docker")
	I1018 17:50:16.412799   69488 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 17:50:16.412889   69488 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 17:50:16.413002   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m03
	I1018 17:50:16.433582   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m03/id_rsa Username:docker}
	I1018 17:50:16.541794   69488 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 17:50:16.545653   69488 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 17:50:16.545679   69488 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 17:50:16.545690   69488 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/addons for local assets ...
	I1018 17:50:16.545754   69488 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/files for local assets ...
	I1018 17:50:16.545831   69488 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> 43202.pem in /etc/ssl/certs
	I1018 17:50:16.545837   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> /etc/ssl/certs/43202.pem
	I1018 17:50:16.545942   69488 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 17:50:16.558126   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /etc/ssl/certs/43202.pem (1708 bytes)
	I1018 17:50:16.579067   69488 start.go:296] duration metric: took 166.265226ms for postStartSetup
	I1018 17:50:16.579147   69488 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 17:50:16.579196   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m03
	I1018 17:50:16.607003   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m03/id_rsa Username:docker}
	I1018 17:50:16.710563   69488 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 17:50:16.715811   69488 fix.go:56] duration metric: took 6.133736189s for fixHost
	I1018 17:50:16.715839   69488 start.go:83] releasing machines lock for "ha-181800-m03", held for 6.133787135s
	I1018 17:50:16.715904   69488 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-181800-m03
	I1018 17:50:16.738713   69488 out.go:179] * Found network options:
	I1018 17:50:16.742042   69488 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1018 17:50:16.745211   69488 proxy.go:120] fail to check proxy env: Error ip not in block
	W1018 17:50:16.745257   69488 proxy.go:120] fail to check proxy env: Error ip not in block
	W1018 17:50:16.745281   69488 proxy.go:120] fail to check proxy env: Error ip not in block
	W1018 17:50:16.745291   69488 proxy.go:120] fail to check proxy env: Error ip not in block
	I1018 17:50:16.745360   69488 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 17:50:16.745415   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m03
	I1018 17:50:16.745719   69488 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 17:50:16.745787   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m03
	I1018 17:50:16.786710   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m03/id_rsa Username:docker}
	I1018 17:50:16.789091   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m03/id_rsa Username:docker}
	I1018 17:50:17.000059   69488 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 17:50:17.007334   69488 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 17:50:17.007407   69488 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 17:50:17.020749   69488 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 17:50:17.020771   69488 start.go:495] detecting cgroup driver to use...
	I1018 17:50:17.020801   69488 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 17:50:17.020860   69488 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 17:50:17.040018   69488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 17:50:17.058499   69488 docker.go:218] disabling cri-docker service (if available) ...
	I1018 17:50:17.058565   69488 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 17:50:17.088757   69488 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 17:50:17.114857   69488 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 17:50:17.279680   69488 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 17:50:17.689048   69488 docker.go:234] disabling docker service ...
	I1018 17:50:17.689168   69488 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 17:50:17.768854   69488 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 17:50:17.797881   69488 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 17:50:18.156314   69488 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 17:50:18.369568   69488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 17:50:18.394137   69488 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 17:50:18.428969   69488 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 17:50:18.429103   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:50:18.447576   69488 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 17:50:18.447692   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:50:18.482845   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:50:18.510376   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:50:18.531315   69488 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 17:50:18.548495   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:50:18.563525   69488 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:50:18.581424   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:50:18.594509   69488 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 17:50:18.609129   69488 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 17:50:18.621435   69488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 17:50:18.879315   69488 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 17:50:19.151219   69488 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 17:50:19.151291   69488 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 17:50:19.155163   69488 start.go:563] Will wait 60s for crictl version
	I1018 17:50:19.155231   69488 ssh_runner.go:195] Run: which crictl
	I1018 17:50:19.159144   69488 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 17:50:19.185150   69488 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 17:50:19.185237   69488 ssh_runner.go:195] Run: crio --version
	I1018 17:50:19.215107   69488 ssh_runner.go:195] Run: crio --version
	I1018 17:50:19.252641   69488 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 17:50:19.255663   69488 out.go:179]   - env NO_PROXY=192.168.49.2
	I1018 17:50:19.258473   69488 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1018 17:50:19.261365   69488 cli_runner.go:164] Run: docker network inspect ha-181800 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 17:50:19.278013   69488 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1018 17:50:19.282046   69488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 17:50:19.291553   69488 mustload.go:65] Loading cluster: ha-181800
	I1018 17:50:19.291792   69488 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:50:19.292044   69488 cli_runner.go:164] Run: docker container inspect ha-181800 --format={{.State.Status}}
	I1018 17:50:19.308345   69488 host.go:66] Checking if "ha-181800" exists ...
	I1018 17:50:19.308613   69488 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800 for IP: 192.168.49.4
	I1018 17:50:19.308629   69488 certs.go:195] generating shared ca certs ...
	I1018 17:50:19.308644   69488 certs.go:227] acquiring lock for ca certs: {Name:mk544ed642fa2832e9f6dd22fa45f3270b7c1ad4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:50:19.308750   69488 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key
	I1018 17:50:19.308801   69488 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key
	I1018 17:50:19.308811   69488 certs.go:257] generating profile certs ...
	I1018 17:50:19.308888   69488 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/client.key
	I1018 17:50:19.308994   69488 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.key.35e78fdb
	I1018 17:50:19.309039   69488 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.key
	I1018 17:50:19.309051   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1018 17:50:19.309064   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1018 17:50:19.309079   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1018 17:50:19.309093   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1018 17:50:19.309106   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1018 17:50:19.309121   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1018 17:50:19.309132   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1018 17:50:19.309147   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1018 17:50:19.309202   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem (1338 bytes)
	W1018 17:50:19.309233   69488 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320_empty.pem, impossibly tiny 0 bytes
	I1018 17:50:19.309246   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 17:50:19.309272   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem (1078 bytes)
	I1018 17:50:19.309298   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem (1123 bytes)
	I1018 17:50:19.309353   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem (1675 bytes)
	I1018 17:50:19.309405   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem (1708 bytes)
	I1018 17:50:19.309436   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> /usr/share/ca-certificates/43202.pem
	I1018 17:50:19.309452   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:50:19.309465   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem -> /usr/share/ca-certificates/4320.pem
	I1018 17:50:19.309518   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:50:19.326970   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800/id_rsa Username:docker}
	I1018 17:50:19.425285   69488 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1018 17:50:19.430205   69488 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1018 17:50:19.438544   69488 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1018 17:50:19.442194   69488 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1018 17:50:19.450335   69488 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1018 17:50:19.454272   69488 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1018 17:50:19.462534   69488 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1018 17:50:19.466318   69488 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1018 17:50:19.475475   69488 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1018 17:50:19.479138   69488 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1018 17:50:19.487039   69488 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1018 17:50:19.492406   69488 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1018 17:50:19.511212   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 17:50:19.558261   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 17:50:19.590631   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 17:50:19.618816   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 17:50:19.644073   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1018 17:50:19.666879   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 17:50:19.688513   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 17:50:19.707989   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 17:50:19.736170   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /usr/share/ca-certificates/43202.pem (1708 bytes)
	I1018 17:50:19.759883   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 17:50:19.781940   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem --> /usr/share/ca-certificates/4320.pem (1338 bytes)
	I1018 17:50:19.806805   69488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1018 17:50:19.820301   69488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1018 17:50:19.837237   69488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1018 17:50:19.852161   69488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1018 17:50:19.865774   69488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1018 17:50:19.879759   69488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1018 17:50:19.893543   69488 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1018 17:50:19.907773   69488 ssh_runner.go:195] Run: openssl version
	I1018 17:50:19.914031   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4320.pem && ln -fs /usr/share/ca-certificates/4320.pem /etc/ssl/certs/4320.pem"
	I1018 17:50:19.923464   69488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4320.pem
	I1018 17:50:19.928100   69488 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 17:19 /usr/share/ca-certificates/4320.pem
	I1018 17:50:19.928198   69488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4320.pem
	I1018 17:50:19.970114   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4320.pem /etc/ssl/certs/51391683.0"
	I1018 17:50:19.978890   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43202.pem && ln -fs /usr/share/ca-certificates/43202.pem /etc/ssl/certs/43202.pem"
	I1018 17:50:19.987235   69488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43202.pem
	I1018 17:50:19.991041   69488 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 17:19 /usr/share/ca-certificates/43202.pem
	I1018 17:50:19.991160   69488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43202.pem
	I1018 17:50:20.033052   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43202.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 17:50:20.042399   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 17:50:20.051218   69488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:50:20.055291   69488 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:50:20.055383   69488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:50:20.097864   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 17:50:20.106870   69488 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 17:50:20.111573   69488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 17:50:20.153811   69488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 17:50:20.195276   69488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 17:50:20.242865   69488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 17:50:20.284917   69488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 17:50:20.327528   69488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1018 17:50:20.380629   69488 kubeadm.go:934] updating node {m03 192.168.49.4 8443 v1.34.1 crio true true} ...
	I1018 17:50:20.380764   69488 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-181800-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-181800 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 17:50:20.380810   69488 kube-vip.go:115] generating kube-vip config ...
	I1018 17:50:20.380884   69488 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1018 17:50:20.394557   69488 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1018 17:50:20.394614   69488 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1018 17:50:20.394671   69488 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 17:50:20.404177   69488 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 17:50:20.404302   69488 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1018 17:50:20.412251   69488 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1018 17:50:20.425311   69488 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 17:50:20.441214   69488 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1018 17:50:20.463677   69488 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1018 17:50:20.468015   69488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 17:50:20.478500   69488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 17:50:20.642164   69488 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 17:50:20.673908   69488 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 17:50:20.674213   69488 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:50:20.679253   69488 out.go:179] * Verifying Kubernetes components...
	I1018 17:50:20.682245   69488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 17:50:20.839086   69488 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 17:50:20.854027   69488 kapi.go:59] client config for ha-181800: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/client.key", CAFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1018 17:50:20.854101   69488 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1018 17:50:20.854335   69488 node_ready.go:35] waiting up to 6m0s for node "ha-181800-m03" to be "Ready" ...
	W1018 17:50:22.857724   69488 node_ready.go:57] node "ha-181800-m03" has "Ready":"Unknown" status (will retry)
	W1018 17:50:24.858447   69488 node_ready.go:57] node "ha-181800-m03" has "Ready":"Unknown" status (will retry)
	W1018 17:50:26.858609   69488 node_ready.go:57] node "ha-181800-m03" has "Ready":"Unknown" status (will retry)
	W1018 17:50:29.359403   69488 node_ready.go:57] node "ha-181800-m03" has "Ready":"Unknown" status (will retry)
	W1018 17:50:31.859188   69488 node_ready.go:57] node "ha-181800-m03" has "Ready":"Unknown" status (will retry)
	W1018 17:50:34.358228   69488 node_ready.go:57] node "ha-181800-m03" has "Ready":"Unknown" status (will retry)
	I1018 17:50:34.857876   69488 node_ready.go:49] node "ha-181800-m03" is "Ready"
	I1018 17:50:34.857902   69488 node_ready.go:38] duration metric: took 14.003549338s for node "ha-181800-m03" to be "Ready" ...
	I1018 17:50:34.857914   69488 api_server.go:52] waiting for apiserver process to appear ...
	I1018 17:50:34.857973   69488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:50:34.869120   69488 api_server.go:72] duration metric: took 14.194796326s to wait for apiserver process to appear ...
	I1018 17:50:34.869149   69488 api_server.go:88] waiting for apiserver healthz status ...
	I1018 17:50:34.869170   69488 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 17:50:34.878933   69488 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1018 17:50:34.879871   69488 api_server.go:141] control plane version: v1.34.1
	I1018 17:50:34.879896   69488 api_server.go:131] duration metric: took 10.739864ms to wait for apiserver health ...
	I1018 17:50:34.879915   69488 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 17:50:34.886492   69488 system_pods.go:59] 26 kube-system pods found
	I1018 17:50:34.886536   69488 system_pods.go:61] "coredns-66bc5c9577-f6v2w" [a1fbdf00-9636-43a5-b1ed-a98bcacb5537] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 17:50:34.886578   69488 system_pods.go:61] "coredns-66bc5c9577-p7nbg" [9d361193-5b45-400e-8161-804fc30e7515] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 17:50:34.886593   69488 system_pods.go:61] "etcd-ha-181800" [3aafeb42-d09a-4b84-9739-e25adc3a4135] Running
	I1018 17:50:34.886598   69488 system_pods.go:61] "etcd-ha-181800-m02" [194d8d52-b9b6-43ae-8c1f-01b965d3ae96] Running
	I1018 17:50:34.886603   69488 system_pods.go:61] "etcd-ha-181800-m03" [f52cd0ee-6f99-49ba-8c4f-218b8d166fe2] Running
	I1018 17:50:34.886607   69488 system_pods.go:61] "kindnet-72mvm" [5edfc356-9d49-4895-b36a-06c2bd39155a] Running
	I1018 17:50:34.886622   69488 system_pods.go:61] "kindnet-86s8z" [6559ac9e-c73d-4d49-a0e1-87d630e5bec8] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1018 17:50:34.886629   69488 system_pods.go:61] "kindnet-88bv7" [3b3b9715-1e6e-4046-adae-f372381e068a] Running
	I1018 17:50:34.886642   69488 system_pods.go:61] "kindnet-9qbbw" [d1a305ed-4a0e-4ccc-90e0-04577ad4e5c4] Running
	I1018 17:50:34.886646   69488 system_pods.go:61] "kube-apiserver-ha-181800" [4966738e-d055-404d-82ad-0d3f23ef0337] Running
	I1018 17:50:34.886650   69488 system_pods.go:61] "kube-apiserver-ha-181800-m02" [344fc499-0c04-4f86-a919-3c2da1e7a1e6] Running
	I1018 17:50:34.886654   69488 system_pods.go:61] "kube-apiserver-ha-181800-m03" [ce72f944-adc2-46a9-a83c-dc75936c3e9c] Running
	I1018 17:50:34.886659   69488 system_pods.go:61] "kube-controller-manager-ha-181800" [9a4be61b-4ecc-46da-86a1-472b6da720b9] Running
	I1018 17:50:34.886672   69488 system_pods.go:61] "kube-controller-manager-ha-181800-m02" [6a519ce2-92dc-4003-8f1a-6d818fea6da3] Running
	I1018 17:50:34.886679   69488 system_pods.go:61] "kube-controller-manager-ha-181800-m03" [9d247c9d-37a0-4880-8b0a-1134ebb963ab] Running
	I1018 17:50:34.886685   69488 system_pods.go:61] "kube-proxy-dpwpn" [dfabd129-fc36-4d16-ab0f-0b9ecc015712] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1018 17:50:34.886699   69488 system_pods.go:61] "kube-proxy-fj4ww" [40c5681f-ad11-4e21-a852-5601e2a9fa6e] Running
	I1018 17:50:34.886703   69488 system_pods.go:61] "kube-proxy-qsqmb" [9e100b31-50e5-4d86-a234-0d6277009e98] Running
	I1018 17:50:34.886707   69488 system_pods.go:61] "kube-proxy-stgvm" [15b89226-91ae-478f-acfe-7841776b1377] Running
	I1018 17:50:34.886714   69488 system_pods.go:61] "kube-scheduler-ha-181800" [f4699386-754c-4fa2-8556-174d872d6825] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 17:50:34.886723   69488 system_pods.go:61] "kube-scheduler-ha-181800-m02" [565d55c5-9541-4ef9-a036-3d9ff03f0fa9] Running
	I1018 17:50:34.886727   69488 system_pods.go:61] "kube-scheduler-ha-181800-m03" [4f8687e4-3dbc-4c98-97a4-ab703b016798] Running
	I1018 17:50:34.886732   69488 system_pods.go:61] "kube-vip-ha-181800" [a947f5a9-6257-4ff0-9f73-2d720974668b] Running
	I1018 17:50:34.886739   69488 system_pods.go:61] "kube-vip-ha-181800-m02" [21258022-efed-42fb-b206-89ffcd8d3820] Running
	I1018 17:50:34.886743   69488 system_pods.go:61] "kube-vip-ha-181800-m03" [0087f776-5d07-4c43-906d-c63afc2cc349] Running
	I1018 17:50:34.886747   69488 system_pods.go:61] "storage-provisioner" [3c6521cd-8e1b-46aa-96a3-39e475e1426c] Running
	I1018 17:50:34.886753   69488 system_pods.go:74] duration metric: took 6.831276ms to wait for pod list to return data ...
	I1018 17:50:34.886767   69488 default_sa.go:34] waiting for default service account to be created ...
	I1018 17:50:34.890059   69488 default_sa.go:45] found service account: "default"
	I1018 17:50:34.890090   69488 default_sa.go:55] duration metric: took 3.316408ms for default service account to be created ...
	I1018 17:50:34.890099   69488 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 17:50:34.899064   69488 system_pods.go:86] 26 kube-system pods found
	I1018 17:50:34.899114   69488 system_pods.go:89] "coredns-66bc5c9577-f6v2w" [a1fbdf00-9636-43a5-b1ed-a98bcacb5537] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 17:50:34.899126   69488 system_pods.go:89] "coredns-66bc5c9577-p7nbg" [9d361193-5b45-400e-8161-804fc30e7515] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 17:50:34.899135   69488 system_pods.go:89] "etcd-ha-181800" [3aafeb42-d09a-4b84-9739-e25adc3a4135] Running
	I1018 17:50:34.899145   69488 system_pods.go:89] "etcd-ha-181800-m02" [194d8d52-b9b6-43ae-8c1f-01b965d3ae96] Running
	I1018 17:50:34.899154   69488 system_pods.go:89] "etcd-ha-181800-m03" [f52cd0ee-6f99-49ba-8c4f-218b8d166fe2] Running
	I1018 17:50:34.899159   69488 system_pods.go:89] "kindnet-72mvm" [5edfc356-9d49-4895-b36a-06c2bd39155a] Running
	I1018 17:50:34.899172   69488 system_pods.go:89] "kindnet-86s8z" [6559ac9e-c73d-4d49-a0e1-87d630e5bec8] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1018 17:50:34.899182   69488 system_pods.go:89] "kindnet-88bv7" [3b3b9715-1e6e-4046-adae-f372381e068a] Running
	I1018 17:50:34.899196   69488 system_pods.go:89] "kindnet-9qbbw" [d1a305ed-4a0e-4ccc-90e0-04577ad4e5c4] Running
	I1018 17:50:34.899202   69488 system_pods.go:89] "kube-apiserver-ha-181800" [4966738e-d055-404d-82ad-0d3f23ef0337] Running
	I1018 17:50:34.899213   69488 system_pods.go:89] "kube-apiserver-ha-181800-m02" [344fc499-0c04-4f86-a919-3c2da1e7a1e6] Running
	I1018 17:50:34.899223   69488 system_pods.go:89] "kube-apiserver-ha-181800-m03" [ce72f944-adc2-46a9-a83c-dc75936c3e9c] Running
	I1018 17:50:34.899228   69488 system_pods.go:89] "kube-controller-manager-ha-181800" [9a4be61b-4ecc-46da-86a1-472b6da720b9] Running
	I1018 17:50:34.899243   69488 system_pods.go:89] "kube-controller-manager-ha-181800-m02" [6a519ce2-92dc-4003-8f1a-6d818fea6da3] Running
	I1018 17:50:34.899249   69488 system_pods.go:89] "kube-controller-manager-ha-181800-m03" [9d247c9d-37a0-4880-8b0a-1134ebb963ab] Running
	I1018 17:50:34.899260   69488 system_pods.go:89] "kube-proxy-dpwpn" [dfabd129-fc36-4d16-ab0f-0b9ecc015712] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1018 17:50:34.899271   69488 system_pods.go:89] "kube-proxy-fj4ww" [40c5681f-ad11-4e21-a852-5601e2a9fa6e] Running
	I1018 17:50:34.899276   69488 system_pods.go:89] "kube-proxy-qsqmb" [9e100b31-50e5-4d86-a234-0d6277009e98] Running
	I1018 17:50:34.899281   69488 system_pods.go:89] "kube-proxy-stgvm" [15b89226-91ae-478f-acfe-7841776b1377] Running
	I1018 17:50:34.899294   69488 system_pods.go:89] "kube-scheduler-ha-181800" [f4699386-754c-4fa2-8556-174d872d6825] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 17:50:34.899303   69488 system_pods.go:89] "kube-scheduler-ha-181800-m02" [565d55c5-9541-4ef9-a036-3d9ff03f0fa9] Running
	I1018 17:50:34.899308   69488 system_pods.go:89] "kube-scheduler-ha-181800-m03" [4f8687e4-3dbc-4c98-97a4-ab703b016798] Running
	I1018 17:50:34.899312   69488 system_pods.go:89] "kube-vip-ha-181800" [a947f5a9-6257-4ff0-9f73-2d720974668b] Running
	I1018 17:50:34.899323   69488 system_pods.go:89] "kube-vip-ha-181800-m02" [21258022-efed-42fb-b206-89ffcd8d3820] Running
	I1018 17:50:34.899327   69488 system_pods.go:89] "kube-vip-ha-181800-m03" [0087f776-5d07-4c43-906d-c63afc2cc349] Running
	I1018 17:50:34.899331   69488 system_pods.go:89] "storage-provisioner" [3c6521cd-8e1b-46aa-96a3-39e475e1426c] Running
	I1018 17:50:34.899338   69488 system_pods.go:126] duration metric: took 9.233497ms to wait for k8s-apps to be running ...
	I1018 17:50:34.899350   69488 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 17:50:34.899417   69488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 17:50:34.917250   69488 system_svc.go:56] duration metric: took 17.889347ms WaitForService to wait for kubelet
	I1018 17:50:34.917280   69488 kubeadm.go:586] duration metric: took 14.242961018s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 17:50:34.917312   69488 node_conditions.go:102] verifying NodePressure condition ...
	I1018 17:50:34.921584   69488 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 17:50:34.921618   69488 node_conditions.go:123] node cpu capacity is 2
	I1018 17:50:34.921629   69488 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 17:50:34.921635   69488 node_conditions.go:123] node cpu capacity is 2
	I1018 17:50:34.921640   69488 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 17:50:34.921644   69488 node_conditions.go:123] node cpu capacity is 2
	I1018 17:50:34.921648   69488 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 17:50:34.921652   69488 node_conditions.go:123] node cpu capacity is 2
	I1018 17:50:34.921657   69488 node_conditions.go:105] duration metric: took 4.33997ms to run NodePressure ...
	I1018 17:50:34.921672   69488 start.go:241] waiting for startup goroutines ...
	I1018 17:50:34.921695   69488 start.go:255] writing updated cluster config ...
	I1018 17:50:34.925146   69488 out.go:203] 
	I1018 17:50:34.928178   69488 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:50:34.928377   69488 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/config.json ...
	I1018 17:50:34.931719   69488 out.go:179] * Starting "ha-181800-m04" worker node in "ha-181800" cluster
	I1018 17:50:34.934625   69488 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 17:50:34.937723   69488 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 17:50:34.940621   69488 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 17:50:34.940656   69488 cache.go:58] Caching tarball of preloaded images
	I1018 17:50:34.940709   69488 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 17:50:34.940775   69488 preload.go:233] Found /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 17:50:34.940787   69488 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 17:50:34.940923   69488 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/config.json ...
	I1018 17:50:34.962521   69488 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 17:50:34.962544   69488 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 17:50:34.962563   69488 cache.go:232] Successfully downloaded all kic artifacts
	I1018 17:50:34.962587   69488 start.go:360] acquireMachinesLock for ha-181800-m04: {Name:mkde4f18de8924439f6b0cc4435fbaf784c3faa2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 17:50:34.962654   69488 start.go:364] duration metric: took 47.016µs to acquireMachinesLock for "ha-181800-m04"
	I1018 17:50:34.962676   69488 start.go:96] Skipping create...Using existing machine configuration
	I1018 17:50:34.962691   69488 fix.go:54] fixHost starting: m04
	I1018 17:50:34.962948   69488 cli_runner.go:164] Run: docker container inspect ha-181800-m04 --format={{.State.Status}}
	I1018 17:50:34.980810   69488 fix.go:112] recreateIfNeeded on ha-181800-m04: state=Stopped err=<nil>
	W1018 17:50:34.980838   69488 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 17:50:34.984164   69488 out.go:252] * Restarting existing docker container for "ha-181800-m04" ...
	I1018 17:50:34.984251   69488 cli_runner.go:164] Run: docker start ha-181800-m04
	I1018 17:50:35.315737   69488 cli_runner.go:164] Run: docker container inspect ha-181800-m04 --format={{.State.Status}}
	I1018 17:50:35.337160   69488 kic.go:430] container "ha-181800-m04" state is running.
	I1018 17:50:35.337590   69488 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-181800-m04
	I1018 17:50:35.363433   69488 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/config.json ...
	I1018 17:50:35.363682   69488 machine.go:93] provisionDockerMachine start ...
	I1018 17:50:35.363737   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m04
	I1018 17:50:35.394986   69488 main.go:141] libmachine: Using SSH client type: native
	I1018 17:50:35.395304   69488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1018 17:50:35.395315   69488 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 17:50:35.396115   69488 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1018 17:50:38.582281   69488 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-181800-m04
	
	I1018 17:50:38.582366   69488 ubuntu.go:182] provisioning hostname "ha-181800-m04"
	I1018 17:50:38.582470   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m04
	I1018 17:50:38.612842   69488 main.go:141] libmachine: Using SSH client type: native
	I1018 17:50:38.613162   69488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1018 17:50:38.613175   69488 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-181800-m04 && echo "ha-181800-m04" | sudo tee /etc/hostname
	I1018 17:50:38.824220   69488 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-181800-m04
	
	I1018 17:50:38.824341   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m04
	I1018 17:50:38.867678   69488 main.go:141] libmachine: Using SSH client type: native
	I1018 17:50:38.867969   69488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1018 17:50:38.867985   69488 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-181800-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-181800-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-181800-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 17:50:39.054604   69488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 17:50:39.054689   69488 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-2509/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-2509/.minikube}
	I1018 17:50:39.054718   69488 ubuntu.go:190] setting up certificates
	I1018 17:50:39.054753   69488 provision.go:84] configureAuth start
	I1018 17:50:39.054834   69488 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-181800-m04
	I1018 17:50:39.086058   69488 provision.go:143] copyHostCerts
	I1018 17:50:39.086092   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem
	I1018 17:50:39.086123   69488 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem, removing ...
	I1018 17:50:39.086130   69488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem
	I1018 17:50:39.086205   69488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem (1078 bytes)
	I1018 17:50:39.086277   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem
	I1018 17:50:39.086294   69488 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem, removing ...
	I1018 17:50:39.086298   69488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem
	I1018 17:50:39.086323   69488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem (1123 bytes)
	I1018 17:50:39.086360   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem
	I1018 17:50:39.086376   69488 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem, removing ...
	I1018 17:50:39.086380   69488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem
	I1018 17:50:39.086403   69488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem (1675 bytes)
	I1018 17:50:39.086448   69488 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem org=jenkins.ha-181800-m04 san=[127.0.0.1 192.168.49.5 ha-181800-m04 localhost minikube]
	I1018 17:50:39.468879   69488 provision.go:177] copyRemoteCerts
	I1018 17:50:39.469042   69488 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 17:50:39.469105   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m04
	I1018 17:50:39.488386   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m04/id_rsa Username:docker}
	I1018 17:50:39.624142   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1018 17:50:39.624201   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 17:50:39.661469   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1018 17:50:39.661533   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1018 17:50:39.687551   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1018 17:50:39.687610   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 17:50:39.714808   69488 provision.go:87] duration metric: took 660.019137ms to configureAuth
	I1018 17:50:39.714833   69488 ubuntu.go:206] setting minikube options for container-runtime
	I1018 17:50:39.715059   69488 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:50:39.715179   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m04
	I1018 17:50:39.744352   69488 main.go:141] libmachine: Using SSH client type: native
	I1018 17:50:39.744665   69488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1018 17:50:39.744680   69488 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 17:50:40.169343   69488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 17:50:40.169451   69488 machine.go:96] duration metric: took 4.805759657s to provisionDockerMachine
	I1018 17:50:40.169476   69488 start.go:293] postStartSetup for "ha-181800-m04" (driver="docker")
	I1018 17:50:40.169509   69488 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 17:50:40.169593   69488 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 17:50:40.169660   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m04
	I1018 17:50:40.199327   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m04/id_rsa Username:docker}
	I1018 17:50:40.309268   69488 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 17:50:40.313860   69488 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 17:50:40.313893   69488 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 17:50:40.313903   69488 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/addons for local assets ...
	I1018 17:50:40.313963   69488 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/files for local assets ...
	I1018 17:50:40.314046   69488 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> 43202.pem in /etc/ssl/certs
	I1018 17:50:40.314057   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> /etc/ssl/certs/43202.pem
	I1018 17:50:40.314164   69488 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 17:50:40.322086   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /etc/ssl/certs/43202.pem (1708 bytes)
	I1018 17:50:40.345649   69488 start.go:296] duration metric: took 176.137258ms for postStartSetup
	I1018 17:50:40.345726   69488 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 17:50:40.345765   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m04
	I1018 17:50:40.367346   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m04/id_rsa Username:docker}
	I1018 17:50:40.476066   69488 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 17:50:40.481571   69488 fix.go:56] duration metric: took 5.518874256s for fixHost
	I1018 17:50:40.481594   69488 start.go:83] releasing machines lock for "ha-181800-m04", held for 5.518929354s
	I1018 17:50:40.481667   69488 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-181800-m04
	I1018 17:50:40.518678   69488 out.go:179] * Found network options:
	I1018 17:50:40.522829   69488 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	W1018 17:50:40.526545   69488 proxy.go:120] fail to check proxy env: Error ip not in block
	W1018 17:50:40.526576   69488 proxy.go:120] fail to check proxy env: Error ip not in block
	W1018 17:50:40.526587   69488 proxy.go:120] fail to check proxy env: Error ip not in block
	W1018 17:50:40.526609   69488 proxy.go:120] fail to check proxy env: Error ip not in block
	W1018 17:50:40.526619   69488 proxy.go:120] fail to check proxy env: Error ip not in block
	W1018 17:50:40.526628   69488 proxy.go:120] fail to check proxy env: Error ip not in block
	I1018 17:50:40.526702   69488 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 17:50:40.526739   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m04
	I1018 17:50:40.526991   69488 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 17:50:40.527047   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m04
	I1018 17:50:40.564877   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m04/id_rsa Username:docker}
	I1018 17:50:40.572778   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m04/id_rsa Username:docker}
	I1018 17:50:40.812088   69488 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 17:50:40.818560   69488 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 17:50:40.818643   69488 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 17:50:40.827770   69488 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 17:50:40.827794   69488 start.go:495] detecting cgroup driver to use...
	I1018 17:50:40.827830   69488 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 17:50:40.827881   69488 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 17:50:40.844762   69488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 17:50:40.859855   69488 docker.go:218] disabling cri-docker service (if available) ...
	I1018 17:50:40.859920   69488 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 17:50:40.877123   69488 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 17:50:40.901442   69488 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 17:50:41.039508   69488 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 17:50:41.185848   69488 docker.go:234] disabling docker service ...
	I1018 17:50:41.185936   69488 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 17:50:41.204077   69488 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 17:50:41.219382   69488 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 17:50:41.421847   69488 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 17:50:41.682651   69488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 17:50:41.704546   69488 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 17:50:41.722306   69488 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 17:50:41.722376   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:50:41.737444   69488 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 17:50:41.737564   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:50:41.753240   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:50:41.765254   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:50:41.778891   69488 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 17:50:41.788840   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:50:41.799676   69488 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:50:41.810022   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:50:41.820591   69488 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 17:50:41.828788   69488 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 17:50:41.838483   69488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 17:50:41.972124   69488 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 17:50:42.178891   69488 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 17:50:42.178980   69488 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 17:50:42.184242   69488 start.go:563] Will wait 60s for crictl version
	I1018 17:50:42.184331   69488 ssh_runner.go:195] Run: which crictl
	I1018 17:50:42.191980   69488 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 17:50:42.224462   69488 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 17:50:42.224630   69488 ssh_runner.go:195] Run: crio --version
	I1018 17:50:42.261636   69488 ssh_runner.go:195] Run: crio --version
	I1018 17:50:42.307376   69488 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 17:50:42.310676   69488 out.go:179]   - env NO_PROXY=192.168.49.2
	I1018 17:50:42.313598   69488 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1018 17:50:42.316600   69488 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	I1018 17:50:42.319690   69488 cli_runner.go:164] Run: docker network inspect ha-181800 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 17:50:42.337639   69488 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1018 17:50:42.341794   69488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 17:50:42.354387   69488 mustload.go:65] Loading cluster: ha-181800
	I1018 17:50:42.354632   69488 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:50:42.354880   69488 cli_runner.go:164] Run: docker container inspect ha-181800 --format={{.State.Status}}
	I1018 17:50:42.375574   69488 host.go:66] Checking if "ha-181800" exists ...
	I1018 17:50:42.375851   69488 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800 for IP: 192.168.49.5
	I1018 17:50:42.375865   69488 certs.go:195] generating shared ca certs ...
	I1018 17:50:42.375878   69488 certs.go:227] acquiring lock for ca certs: {Name:mk544ed642fa2832e9f6dd22fa45f3270b7c1ad4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:50:42.375994   69488 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key
	I1018 17:50:42.376039   69488 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key
	I1018 17:50:42.376053   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1018 17:50:42.376065   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1018 17:50:42.376082   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1018 17:50:42.376099   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1018 17:50:42.376158   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem (1338 bytes)
	W1018 17:50:42.376191   69488 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320_empty.pem, impossibly tiny 0 bytes
	I1018 17:50:42.376202   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 17:50:42.376227   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem (1078 bytes)
	I1018 17:50:42.376253   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem (1123 bytes)
	I1018 17:50:42.376280   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem (1675 bytes)
	I1018 17:50:42.376328   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem (1708 bytes)
	I1018 17:50:42.376359   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:50:42.376376   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem -> /usr/share/ca-certificates/4320.pem
	I1018 17:50:42.376390   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> /usr/share/ca-certificates/43202.pem
	I1018 17:50:42.376442   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 17:50:42.395447   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 17:50:42.416556   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 17:50:42.438126   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 17:50:42.461131   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 17:50:42.491460   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem --> /usr/share/ca-certificates/4320.pem (1338 bytes)
	I1018 17:50:42.516977   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /usr/share/ca-certificates/43202.pem (1708 bytes)
	I1018 17:50:42.546320   69488 ssh_runner.go:195] Run: openssl version
	I1018 17:50:42.554579   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 17:50:42.566626   69488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:50:42.570900   69488 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:50:42.570969   69488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:50:42.623862   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 17:50:42.634866   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4320.pem && ln -fs /usr/share/ca-certificates/4320.pem /etc/ssl/certs/4320.pem"
	I1018 17:50:42.645108   69488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4320.pem
	I1018 17:50:42.655323   69488 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 17:19 /usr/share/ca-certificates/4320.pem
	I1018 17:50:42.655394   69488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4320.pem
	I1018 17:50:42.704646   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4320.pem /etc/ssl/certs/51391683.0"
	I1018 17:50:42.713644   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43202.pem && ln -fs /usr/share/ca-certificates/43202.pem /etc/ssl/certs/43202.pem"
	I1018 17:50:42.722573   69488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43202.pem
	I1018 17:50:42.726769   69488 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 17:19 /usr/share/ca-certificates/43202.pem
	I1018 17:50:42.726843   69488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43202.pem
	I1018 17:50:42.784245   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43202.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 17:50:42.792405   69488 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 17:50:42.803513   69488 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 17:50:42.803579   69488 kubeadm.go:934] updating node {m04 192.168.49.5 0 v1.34.1 crio false true} ...
	I1018 17:50:42.803680   69488 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-181800-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-181800 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 17:50:42.803759   69488 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 17:50:42.812894   69488 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 17:50:42.813002   69488 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1018 17:50:42.821266   69488 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1018 17:50:42.839760   69488 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 17:50:42.859184   69488 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1018 17:50:42.864035   69488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 17:50:42.875123   69488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 17:50:43.006572   69488 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 17:50:43.022917   69488 start.go:235] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1018 17:50:43.023313   69488 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:50:43.026393   69488 out.go:179] * Verifying Kubernetes components...
	I1018 17:50:43.029360   69488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 17:50:43.176018   69488 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 17:50:43.195799   69488 kapi.go:59] client config for ha-181800: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/client.key", CAFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1018 17:50:43.195926   69488 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1018 17:50:43.196200   69488 node_ready.go:35] waiting up to 6m0s for node "ha-181800-m04" to be "Ready" ...
	W1018 17:50:45.201538   69488 node_ready.go:57] node "ha-181800-m04" has "Ready":"Unknown" status (will retry)
	W1018 17:50:47.702556   69488 node_ready.go:57] node "ha-181800-m04" has "Ready":"Unknown" status (will retry)
	W1018 17:50:50.201440   69488 node_ready.go:57] node "ha-181800-m04" has "Ready":"Unknown" status (will retry)
	I1018 17:50:50.700371   69488 node_ready.go:49] node "ha-181800-m04" is "Ready"
	I1018 17:50:50.700396   69488 node_ready.go:38] duration metric: took 7.50415906s for node "ha-181800-m04" to be "Ready" ...
	I1018 17:50:50.700408   69488 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 17:50:50.700467   69488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 17:50:50.718400   69488 system_svc.go:56] duration metric: took 17.984135ms WaitForService to wait for kubelet
	I1018 17:50:50.718432   69488 kubeadm.go:586] duration metric: took 7.695467215s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 17:50:50.718449   69488 node_conditions.go:102] verifying NodePressure condition ...
	I1018 17:50:50.722731   69488 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 17:50:50.722761   69488 node_conditions.go:123] node cpu capacity is 2
	I1018 17:50:50.722774   69488 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 17:50:50.722779   69488 node_conditions.go:123] node cpu capacity is 2
	I1018 17:50:50.722783   69488 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 17:50:50.722787   69488 node_conditions.go:123] node cpu capacity is 2
	I1018 17:50:50.722791   69488 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 17:50:50.722795   69488 node_conditions.go:123] node cpu capacity is 2
	I1018 17:50:50.722799   69488 node_conditions.go:105] duration metric: took 4.345599ms to run NodePressure ...
	I1018 17:50:50.722811   69488 start.go:241] waiting for startup goroutines ...
	I1018 17:50:50.722837   69488 start.go:255] writing updated cluster config ...
	I1018 17:50:50.723159   69488 ssh_runner.go:195] Run: rm -f paused
	I1018 17:50:50.727229   69488 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 17:50:50.727747   69488 kapi.go:59] client config for ha-181800: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/client.key", CAFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1018 17:50:50.750070   69488 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-f6v2w" in "kube-system" namespace to be "Ready" or be gone ...
	W1018 17:50:52.756554   69488 pod_ready.go:104] pod "coredns-66bc5c9577-f6v2w" is not "Ready", error: <nil>
	W1018 17:50:54.757224   69488 pod_ready.go:104] pod "coredns-66bc5c9577-f6v2w" is not "Ready", error: <nil>
	I1018 17:50:55.872324   69488 pod_ready.go:94] pod "coredns-66bc5c9577-f6v2w" is "Ready"
	I1018 17:50:55.872348   69488 pod_ready.go:86] duration metric: took 5.122247372s for pod "coredns-66bc5c9577-f6v2w" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:55.872359   69488 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-p7nbg" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:55.891895   69488 pod_ready.go:94] pod "coredns-66bc5c9577-p7nbg" is "Ready"
	I1018 17:50:55.891959   69488 pod_ready.go:86] duration metric: took 19.593189ms for pod "coredns-66bc5c9577-p7nbg" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:55.900138   69488 pod_ready.go:83] waiting for pod "etcd-ha-181800" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:55.913638   69488 pod_ready.go:94] pod "etcd-ha-181800" is "Ready"
	I1018 17:50:55.913660   69488 pod_ready.go:86] duration metric: took 13.499842ms for pod "etcd-ha-181800" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:55.913670   69488 pod_ready.go:83] waiting for pod "etcd-ha-181800-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:55.920519   69488 pod_ready.go:94] pod "etcd-ha-181800-m02" is "Ready"
	I1018 17:50:55.920596   69488 pod_ready.go:86] duration metric: took 6.91899ms for pod "etcd-ha-181800-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:55.920619   69488 pod_ready.go:83] waiting for pod "etcd-ha-181800-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:55.954930   69488 pod_ready.go:94] pod "etcd-ha-181800-m03" is "Ready"
	I1018 17:50:55.955010   69488 pod_ready.go:86] duration metric: took 34.368453ms for pod "etcd-ha-181800-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:56.150428   69488 request.go:683] "Waited before sending request" delay="195.256268ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I1018 17:50:56.154502   69488 pod_ready.go:83] waiting for pod "kube-apiserver-ha-181800" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:56.350745   69488 request.go:683] "Waited before sending request" delay="196.132391ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-181800"
	I1018 17:50:56.551187   69488 request.go:683] "Waited before sending request" delay="197.298856ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-181800"
	I1018 17:50:56.554146   69488 pod_ready.go:94] pod "kube-apiserver-ha-181800" is "Ready"
	I1018 17:50:56.554177   69488 pod_ready.go:86] duration metric: took 399.650322ms for pod "kube-apiserver-ha-181800" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:56.554188   69488 pod_ready.go:83] waiting for pod "kube-apiserver-ha-181800-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:56.750528   69488 request.go:683] "Waited before sending request" delay="196.269246ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-181800-m02"
	I1018 17:50:56.951191   69488 request.go:683] "Waited before sending request" delay="191.312029ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-181800-m02"
	I1018 17:50:56.954528   69488 pod_ready.go:94] pod "kube-apiserver-ha-181800-m02" is "Ready"
	I1018 17:50:56.954555   69488 pod_ready.go:86] duration metric: took 400.360633ms for pod "kube-apiserver-ha-181800-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:56.954567   69488 pod_ready.go:83] waiting for pod "kube-apiserver-ha-181800-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:57.150777   69488 request.go:683] "Waited before sending request" delay="196.132408ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-181800-m03"
	I1018 17:50:57.350632   69488 request.go:683] "Waited before sending request" delay="196.3256ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-181800-m03"
	I1018 17:50:57.354249   69488 pod_ready.go:94] pod "kube-apiserver-ha-181800-m03" is "Ready"
	I1018 17:50:57.354277   69488 pod_ready.go:86] duration metric: took 399.70318ms for pod "kube-apiserver-ha-181800-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:57.550692   69488 request.go:683] "Waited before sending request" delay="196.326346ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I1018 17:50:57.554682   69488 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-181800" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:57.750932   69488 request.go:683] "Waited before sending request" delay="196.156235ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-181800"
	I1018 17:50:57.951083   69488 request.go:683] "Waited before sending request" delay="179.305539ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-181800"
	I1018 17:50:57.954373   69488 pod_ready.go:94] pod "kube-controller-manager-ha-181800" is "Ready"
	I1018 17:50:57.954402   69488 pod_ready.go:86] duration metric: took 399.688608ms for pod "kube-controller-manager-ha-181800" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:57.954412   69488 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-181800-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:58.150687   69488 request.go:683] "Waited before sending request" delay="196.203982ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-181800-m02"
	I1018 17:50:58.351259   69488 request.go:683] "Waited before sending request" delay="197.229423ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-181800-m02"
	I1018 17:50:58.354427   69488 pod_ready.go:94] pod "kube-controller-manager-ha-181800-m02" is "Ready"
	I1018 17:50:58.354451   69488 pod_ready.go:86] duration metric: took 400.032752ms for pod "kube-controller-manager-ha-181800-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:58.354461   69488 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-181800-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:58.550867   69488 request.go:683] "Waited before sending request" delay="196.323713ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-181800-m03"
	I1018 17:50:58.751164   69488 request.go:683] "Waited before sending request" delay="196.337531ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-181800-m03"
	I1018 17:50:58.754290   69488 pod_ready.go:94] pod "kube-controller-manager-ha-181800-m03" is "Ready"
	I1018 17:50:58.754318   69488 pod_ready.go:86] duration metric: took 399.850398ms for pod "kube-controller-manager-ha-181800-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:58.950697   69488 request.go:683] "Waited before sending request" delay="196.290137ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I1018 17:50:58.954553   69488 pod_ready.go:83] waiting for pod "kube-proxy-dpwpn" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:59.150998   69488 request.go:683] "Waited before sending request" delay="196.346368ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dpwpn"
	I1018 17:50:59.350617   69488 request.go:683] "Waited before sending request" delay="195.289755ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-181800-m02"
	I1018 17:50:59.353848   69488 pod_ready.go:94] pod "kube-proxy-dpwpn" is "Ready"
	I1018 17:50:59.353878   69488 pod_ready.go:86] duration metric: took 399.293025ms for pod "kube-proxy-dpwpn" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:59.353888   69488 pod_ready.go:83] waiting for pod "kube-proxy-fj4ww" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:59.550367   69488 request.go:683] "Waited before sending request" delay="196.374503ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fj4ww"
	I1018 17:50:59.751156   69488 request.go:683] "Waited before sending request" delay="197.148429ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-181800-m04"
	I1018 17:50:59.754407   69488 pod_ready.go:94] pod "kube-proxy-fj4ww" is "Ready"
	I1018 17:50:59.754437   69488 pod_ready.go:86] duration metric: took 400.541386ms for pod "kube-proxy-fj4ww" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:59.754446   69488 pod_ready.go:83] waiting for pod "kube-proxy-qsqmb" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:59.950755   69488 request.go:683] "Waited before sending request" delay="196.237656ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qsqmb"
	I1018 17:51:00.158458   69488 request.go:683] "Waited before sending request" delay="204.154018ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-181800-m03"
	I1018 17:51:00.170490   69488 pod_ready.go:94] pod "kube-proxy-qsqmb" is "Ready"
	I1018 17:51:00.170526   69488 pod_ready.go:86] duration metric: took 416.072575ms for pod "kube-proxy-qsqmb" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:51:00.170537   69488 pod_ready.go:83] waiting for pod "kube-proxy-stgvm" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:51:00.350837   69488 request.go:683] "Waited before sending request" delay="180.202158ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-stgvm"
	I1018 17:51:00.550600   69488 request.go:683] "Waited before sending request" delay="195.396062ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-181800"
	I1018 17:51:00.553989   69488 pod_ready.go:94] pod "kube-proxy-stgvm" is "Ready"
	I1018 17:51:00.554026   69488 pod_ready.go:86] duration metric: took 383.481925ms for pod "kube-proxy-stgvm" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:51:00.750322   69488 request.go:683] "Waited before sending request" delay="196.164105ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-scheduler"
	I1018 17:51:00.754581   69488 pod_ready.go:83] waiting for pod "kube-scheduler-ha-181800" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:51:00.951090   69488 request.go:683] "Waited before sending request" delay="196.343135ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-181800"
	I1018 17:51:01.151207   69488 request.go:683] "Waited before sending request" delay="196.368472ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-181800"
	I1018 17:51:01.154780   69488 pod_ready.go:94] pod "kube-scheduler-ha-181800" is "Ready"
	I1018 17:51:01.154809   69488 pod_ready.go:86] duration metric: took 400.156865ms for pod "kube-scheduler-ha-181800" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:51:01.154820   69488 pod_ready.go:83] waiting for pod "kube-scheduler-ha-181800-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:51:01.351014   69488 request.go:683] "Waited before sending request" delay="196.125229ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-181800-m02"
	I1018 17:51:01.550334   69488 request.go:683] "Waited before sending request" delay="195.254374ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-181800-m02"
	I1018 17:51:01.553462   69488 pod_ready.go:94] pod "kube-scheduler-ha-181800-m02" is "Ready"
	I1018 17:51:01.553533   69488 pod_ready.go:86] duration metric: took 398.706213ms for pod "kube-scheduler-ha-181800-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:51:01.553558   69488 pod_ready.go:83] waiting for pod "kube-scheduler-ha-181800-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:51:01.750793   69488 request.go:683] "Waited before sending request" delay="197.139116ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-181800-m03"
	I1018 17:51:01.951100   69488 request.go:683] "Waited before sending request" delay="196.302232ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-181800-m03"
	I1018 17:51:01.954435   69488 pod_ready.go:94] pod "kube-scheduler-ha-181800-m03" is "Ready"
	I1018 17:51:01.954463   69488 pod_ready.go:86] duration metric: took 400.885736ms for pod "kube-scheduler-ha-181800-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:51:01.954476   69488 pod_ready.go:40] duration metric: took 11.227212191s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 17:51:02.019798   69488 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 17:51:02.023234   69488 out.go:179] * Done! kubectl is now configured to use "ha-181800" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 18 17:50:45 ha-181800 crio[665]: time="2025-10-18T17:50:45.572124206Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=3818bf02-e1ec-45e5-8db2-98e9f6e8000a name=/runtime.v1.ImageService/ImageStatus
	Oct 18 17:50:45 ha-181800 crio[665]: time="2025-10-18T17:50:45.573451845Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=bdb883a0-d1f7-44fb-bec3-c90a1d2ecb55 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 17:50:45 ha-181800 crio[665]: time="2025-10-18T17:50:45.573727681Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 17:50:45 ha-181800 crio[665]: time="2025-10-18T17:50:45.584989537Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 17:50:45 ha-181800 crio[665]: time="2025-10-18T17:50:45.585193183Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/87a35d3c6fccfe095ac3771dcbde81fc5df65bc9200469d9386fd64ba3708913/merged/etc/passwd: no such file or directory"
	Oct 18 17:50:45 ha-181800 crio[665]: time="2025-10-18T17:50:45.585221163Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/87a35d3c6fccfe095ac3771dcbde81fc5df65bc9200469d9386fd64ba3708913/merged/etc/group: no such file or directory"
	Oct 18 17:50:45 ha-181800 crio[665]: time="2025-10-18T17:50:45.585494192Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 17:50:45 ha-181800 crio[665]: time="2025-10-18T17:50:45.609702849Z" level=info msg="Created container 3955a976d16cdd5db102930c28bfc2c48f3fd22d0d8f4186e30edecd860f23fd: kube-system/storage-provisioner/storage-provisioner" id=bdb883a0-d1f7-44fb-bec3-c90a1d2ecb55 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 17:50:45 ha-181800 crio[665]: time="2025-10-18T17:50:45.610857892Z" level=info msg="Starting container: 3955a976d16cdd5db102930c28bfc2c48f3fd22d0d8f4186e30edecd860f23fd" id=4f969c9f-8845-4412-b24f-e780eb6068e8 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 17:50:45 ha-181800 crio[665]: time="2025-10-18T17:50:45.615041848Z" level=info msg="Started container" PID=1488 containerID=3955a976d16cdd5db102930c28bfc2c48f3fd22d0d8f4186e30edecd860f23fd description=kube-system/storage-provisioner/storage-provisioner id=4f969c9f-8845-4412-b24f-e780eb6068e8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9d76fad66ab674fdb6d96a586ff07b63771e9f80ffb0da6d960f75270994737e
	Oct 18 17:50:54 ha-181800 crio[665]: time="2025-10-18T17:50:54.473504065Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 17:50:54 ha-181800 crio[665]: time="2025-10-18T17:50:54.479286252Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 17:50:54 ha-181800 crio[665]: time="2025-10-18T17:50:54.479449553Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 17:50:54 ha-181800 crio[665]: time="2025-10-18T17:50:54.479659115Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 17:50:54 ha-181800 crio[665]: time="2025-10-18T17:50:54.500865649Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 17:50:54 ha-181800 crio[665]: time="2025-10-18T17:50:54.502400176Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 17:50:54 ha-181800 crio[665]: time="2025-10-18T17:50:54.502551702Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 17:50:54 ha-181800 crio[665]: time="2025-10-18T17:50:54.511806492Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 17:50:54 ha-181800 crio[665]: time="2025-10-18T17:50:54.511960258Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 17:50:54 ha-181800 crio[665]: time="2025-10-18T17:50:54.51203262Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 17:50:54 ha-181800 crio[665]: time="2025-10-18T17:50:54.515388889Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 17:50:54 ha-181800 crio[665]: time="2025-10-18T17:50:54.515422391Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 17:50:54 ha-181800 crio[665]: time="2025-10-18T17:50:54.515444882Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 17:50:54 ha-181800 crio[665]: time="2025-10-18T17:50:54.526060264Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 17:50:54 ha-181800 crio[665]: time="2025-10-18T17:50:54.526097122Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	3955a976d16cd       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   23 seconds ago       Running             storage-provisioner       3                   9d76fad66ab67       storage-provisioner                 kube-system
	b70649f38d4c7       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   54 seconds ago       Running             busybox                   2                   2d6e6e05d930c       busybox-7b57f96db7-fbwpv            default
	244a77fe1563d       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   54 seconds ago       Running             coredns                   2                   ac0ef71240719       coredns-66bc5c9577-p7nbg            kube-system
	45c33b76be4e1       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   54 seconds ago       Running             kindnet-cni               2                   0e97ce88bd2d3       kindnet-72mvm                       kube-system
	8aea864f19933       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   54 seconds ago       Running             kube-proxy                2                   c1b0887367928       kube-proxy-stgvm                    kube-system
	6d80af764ee06       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   55 seconds ago       Running             coredns                   2                   ed23b1fbdbbb3       coredns-66bc5c9577-f6v2w            kube-system
	f2f15c809753a       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   55 seconds ago       Exited              storage-provisioner       2                   9d76fad66ab67       storage-provisioner                 kube-system
	4cff6e37b85af       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   55 seconds ago       Running             kube-controller-manager   8                   c14a7cc20dbd7       kube-controller-manager-ha-181800   kube-system
	787ba7d1db588       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Running             kube-apiserver            8                   aedac42fff114       kube-apiserver-ha-181800            kube-system
	bd6f9d7be6037       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   7                   c14a7cc20dbd7       kube-controller-manager-ha-181800   kube-system
	7df0159a16497       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            7                   aedac42fff114       kube-apiserver-ha-181800            kube-system
	8d49f8f056288       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   2 minutes ago        Running             etcd                      2                   c5458ae9aa01d       etcd-ha-181800                      kube-system
	42139c5070f82       2a8917f902489be5a8dd414209c32b77bd644d187ea646d86dbdc31e85efb551   2 minutes ago        Running             kube-vip                  1                   ac5de0631c6c9       kube-vip-ha-181800                  kube-system
	fb83e2f9880f4       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   2 minutes ago        Running             kube-scheduler            2                   042db5c7b2fa5       kube-scheduler-ha-181800            kube-system
	
	
	==> coredns [244a77fe1563d266b1c09476ad0f3463ffeb31f96c85ba703ffe04a24a967497] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42812 - 40298 "HINFO IN 6519948929031597716.8341788919287889456. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.016440056s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [6d80af764ee0602bdd0407c66fcc9de24c8b7b254f4ce667725e048906d15a87] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35970 - 34760 "HINFO IN 4620377952315927478.2937315152384107880. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.029628682s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-181800
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-181800
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=ha-181800
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T17_33_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 17:33:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-181800
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 17:51:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 17:50:10 +0000   Sat, 18 Oct 2025 17:33:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 17:50:10 +0000   Sat, 18 Oct 2025 17:33:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 17:50:10 +0000   Sat, 18 Oct 2025 17:33:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 17:50:10 +0000   Sat, 18 Oct 2025 17:34:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-181800
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                7dc9b150-98ed-4d4d-b680-5759a1e067a9
	  Boot ID:                    8a1d6305-2994-4aa4-a4cc-6b62966b9918
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-fbwpv             0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 coredns-66bc5c9577-f6v2w             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     17m
	  kube-system                 coredns-66bc5c9577-p7nbg             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     17m
	  kube-system                 etcd-ha-181800                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         17m
	  kube-system                 kindnet-72mvm                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      17m
	  kube-system                 kube-apiserver-ha-181800             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-ha-181800    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-stgvm                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-ha-181800             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-vip-ha-181800                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m14s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 52s                    kube-proxy       
	  Normal   Starting                 17m                    kube-proxy       
	  Normal   Starting                 9m2s                   kube-proxy       
	  Normal   NodeHasSufficientMemory  17m (x8 over 17m)      kubelet          Node ha-181800 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    17m (x8 over 17m)      kubelet          Node ha-181800 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     17m (x8 over 17m)      kubelet          Node ha-181800 status is now: NodeHasSufficientPID
	  Normal   Starting                 17m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 17m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  17m                    kubelet          Node ha-181800 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    17m                    kubelet          Node ha-181800 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     17m                    kubelet          Node ha-181800 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           17m                    node-controller  Node ha-181800 event: Registered Node ha-181800 in Controller
	  Normal   RegisteredNode           17m                    node-controller  Node ha-181800 event: Registered Node ha-181800 in Controller
	  Normal   NodeReady                17m                    kubelet          Node ha-181800 status is now: NodeReady
	  Normal   RegisteredNode           15m                    node-controller  Node ha-181800 event: Registered Node ha-181800 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-181800 event: Registered Node ha-181800 in Controller
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)      kubelet          Node ha-181800 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x8 over 11m)      kubelet          Node ha-181800 status is now: NodeHasSufficientPID
	  Normal   Starting                 11m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 11m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)      kubelet          Node ha-181800 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           9m21s                  node-controller  Node ha-181800 event: Registered Node ha-181800 in Controller
	  Normal   NodeHasSufficientMemory  2m51s (x8 over 2m51s)  kubelet          Node ha-181800 status is now: NodeHasSufficientMemory
	  Normal   Starting                 2m51s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m51s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m51s (x8 over 2m51s)  kubelet          Node ha-181800 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m51s (x8 over 2m51s)  kubelet          Node ha-181800 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           56s                    node-controller  Node ha-181800 event: Registered Node ha-181800 in Controller
	  Normal   RegisteredNode           51s                    node-controller  Node ha-181800 event: Registered Node ha-181800 in Controller
	  Normal   RegisteredNode           27s                    node-controller  Node ha-181800 event: Registered Node ha-181800 in Controller
	
	
	Name:               ha-181800-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-181800-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=ha-181800
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_18T17_34_02_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 17:34:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-181800-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 17:51:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 17:51:00 +0000   Sat, 18 Oct 2025 17:50:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 17:51:00 +0000   Sat, 18 Oct 2025 17:50:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 17:51:00 +0000   Sat, 18 Oct 2025 17:50:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 17:51:00 +0000   Sat, 18 Oct 2025 17:50:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-181800-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                b2dd8f24-78e0-4eba-8b0c-b12412f7af7d
	  Boot ID:                    8a1d6305-2994-4aa4-a4cc-6b62966b9918
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-cp9q6                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 etcd-ha-181800-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         17m
	  kube-system                 kindnet-86s8z                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      17m
	  kube-system                 kube-apiserver-ha-181800-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-ha-181800-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-dpwpn                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-ha-181800-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-vip-ha-181800-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 17m                    kube-proxy       
	  Normal   Starting                 23s                    kube-proxy       
	  Normal   RegisteredNode           17m                    node-controller  Node ha-181800-m02 event: Registered Node ha-181800-m02 in Controller
	  Normal   RegisteredNode           17m                    node-controller  Node ha-181800-m02 event: Registered Node ha-181800-m02 in Controller
	  Normal   RegisteredNode           15m                    node-controller  Node ha-181800-m02 event: Registered Node ha-181800-m02 in Controller
	  Normal   NodeHasSufficientPID     13m (x7 over 13m)      kubelet          Node ha-181800-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  13m (x9 over 13m)      kubelet          Node ha-181800-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x8 over 13m)      kubelet          Node ha-181800-m02 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 13m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 13m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeNotReady             13m                    node-controller  Node ha-181800-m02 status is now: NodeNotReady
	  Warning  ContainerGCFailed        12m                    kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           11m                    node-controller  Node ha-181800-m02 event: Registered Node ha-181800-m02 in Controller
	  Normal   RegisteredNode           9m21s                  node-controller  Node ha-181800-m02 event: Registered Node ha-181800-m02 in Controller
	  Normal   NodeNotReady             8m31s                  node-controller  Node ha-181800-m02 status is now: NodeNotReady
	  Warning  CgroupV1                 2m49s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m49s (x8 over 2m49s)  kubelet          Node ha-181800-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m49s (x8 over 2m49s)  kubelet          Node ha-181800-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m49s (x8 over 2m49s)  kubelet          Node ha-181800-m02 status is now: NodeHasSufficientPID
	  Warning  ContainerGCFailed        109s                   kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           56s                    node-controller  Node ha-181800-m02 event: Registered Node ha-181800-m02 in Controller
	  Normal   RegisteredNode           51s                    node-controller  Node ha-181800-m02 event: Registered Node ha-181800-m02 in Controller
	  Normal   RegisteredNode           27s                    node-controller  Node ha-181800-m02 event: Registered Node ha-181800-m02 in Controller
	
	
	Name:               ha-181800-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-181800-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=ha-181800
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_18T17_35_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 17:35:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-181800-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 17:51:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 17:50:34 +0000   Sat, 18 Oct 2025 17:50:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 17:50:34 +0000   Sat, 18 Oct 2025 17:50:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 17:50:34 +0000   Sat, 18 Oct 2025 17:50:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 17:50:34 +0000   Sat, 18 Oct 2025 17:50:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-181800-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                4a1abf8a-63a3-4737-81ec-1878616c489b
	  Boot ID:                    8a1d6305-2994-4aa4-a4cc-6b62966b9918
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-lzcbm                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 etcd-ha-181800-m03                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         15m
	  kube-system                 kindnet-9qbbw                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      15m
	  kube-system                 kube-apiserver-ha-181800-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-181800-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-qsqmb                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-181800-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-181800-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 15m                kube-proxy       
	  Normal   Starting                 30s                kube-proxy       
	  Normal   RegisteredNode           15m                node-controller  Node ha-181800-m03 event: Registered Node ha-181800-m03 in Controller
	  Normal   RegisteredNode           15m                node-controller  Node ha-181800-m03 event: Registered Node ha-181800-m03 in Controller
	  Normal   RegisteredNode           15m                node-controller  Node ha-181800-m03 event: Registered Node ha-181800-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-181800-m03 event: Registered Node ha-181800-m03 in Controller
	  Normal   RegisteredNode           9m21s              node-controller  Node ha-181800-m03 event: Registered Node ha-181800-m03 in Controller
	  Normal   NodeNotReady             8m31s              node-controller  Node ha-181800-m03 status is now: NodeNotReady
	  Normal   RegisteredNode           56s                node-controller  Node ha-181800-m03 event: Registered Node ha-181800-m03 in Controller
	  Normal   NodeHasSufficientMemory  55s (x8 over 55s)  kubelet          Node ha-181800-m03 status is now: NodeHasSufficientMemory
	  Normal   Starting                 55s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 55s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    55s (x8 over 55s)  kubelet          Node ha-181800-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     55s (x8 over 55s)  kubelet          Node ha-181800-m03 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           51s                node-controller  Node ha-181800-m03 event: Registered Node ha-181800-m03 in Controller
	  Normal   RegisteredNode           27s                node-controller  Node ha-181800-m03 event: Registered Node ha-181800-m03 in Controller
	
	
	Name:               ha-181800-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-181800-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=ha-181800
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_18T17_36_11_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 17:36:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-181800-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 17:51:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 17:50:50 +0000   Sat, 18 Oct 2025 17:50:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 17:50:50 +0000   Sat, 18 Oct 2025 17:50:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 17:50:50 +0000   Sat, 18 Oct 2025 17:50:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 17:50:50 +0000   Sat, 18 Oct 2025 17:50:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-181800-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                afc79373-b3a1-4495-8f28-5c3685ad131e
	  Boot ID:                    8a1d6305-2994-4aa4-a4cc-6b62966b9918
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-88bv7       100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      14m
	  kube-system                 kube-proxy-fj4ww    0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11s                kube-proxy       
	  Normal   Starting                 14m                kube-proxy       
	  Warning  CgroupV1                 14m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     14m (x3 over 14m)  kubelet          Node ha-181800-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    14m (x3 over 14m)  kubelet          Node ha-181800-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  14m (x3 over 14m)  kubelet          Node ha-181800-m04 status is now: NodeHasSufficientMemory
	  Normal   Starting                 14m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           14m                node-controller  Node ha-181800-m04 event: Registered Node ha-181800-m04 in Controller
	  Normal   RegisteredNode           14m                node-controller  Node ha-181800-m04 event: Registered Node ha-181800-m04 in Controller
	  Normal   RegisteredNode           14m                node-controller  Node ha-181800-m04 event: Registered Node ha-181800-m04 in Controller
	  Normal   NodeReady                14m                kubelet          Node ha-181800-m04 status is now: NodeReady
	  Normal   RegisteredNode           11m                node-controller  Node ha-181800-m04 event: Registered Node ha-181800-m04 in Controller
	  Normal   RegisteredNode           9m22s              node-controller  Node ha-181800-m04 event: Registered Node ha-181800-m04 in Controller
	  Normal   NodeNotReady             8m32s              node-controller  Node ha-181800-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           57s                node-controller  Node ha-181800-m04 event: Registered Node ha-181800-m04 in Controller
	  Normal   RegisteredNode           52s                node-controller  Node ha-181800-m04 event: Registered Node ha-181800-m04 in Controller
	  Normal   Starting                 32s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 32s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  29s (x8 over 32s)  kubelet          Node ha-181800-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    29s (x8 over 32s)  kubelet          Node ha-181800-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     29s (x8 over 32s)  kubelet          Node ha-181800-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           28s                node-controller  Node ha-181800-m04 event: Registered Node ha-181800-m04 in Controller
	
	
	==> dmesg <==
	[Oct18 16:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014995] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.499206] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.035776] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.808632] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.418900] kauditd_printk_skb: 36 callbacks suppressed
	[Oct18 17:12] overlayfs: idmapped layers are currently not supported
	[  +0.082393] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Oct18 17:18] overlayfs: idmapped layers are currently not supported
	[Oct18 17:19] overlayfs: idmapped layers are currently not supported
	[Oct18 17:33] overlayfs: idmapped layers are currently not supported
	[ +35.716082] overlayfs: idmapped layers are currently not supported
	[Oct18 17:35] overlayfs: idmapped layers are currently not supported
	[Oct18 17:36] overlayfs: idmapped layers are currently not supported
	[Oct18 17:37] overlayfs: idmapped layers are currently not supported
	[Oct18 17:39] overlayfs: idmapped layers are currently not supported
	[  +3.088699] overlayfs: idmapped layers are currently not supported
	[Oct18 17:48] overlayfs: idmapped layers are currently not supported
	[  +2.594489] overlayfs: idmapped layers are currently not supported
	[Oct18 17:50] overlayfs: idmapped layers are currently not supported
	[ +42.240353] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [8d49f8f05628805a90b3d99b19810fe13d13747bb11c8daf730344aef4d339f6] <==
	{"level":"warn","ts":"2025-10-18T17:50:18.925210Z","caller":"rafthttp/stream.go:222","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7"}
	{"level":"warn","ts":"2025-10-18T17:50:18.926176Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7","error":"unexpected EOF"}
	{"level":"warn","ts":"2025-10-18T17:50:18.926413Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7","error":"unexpected EOF"}
	{"level":"warn","ts":"2025-10-18T17:50:19.134085Z","caller":"rafthttp/stream.go:222","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7"}
	{"level":"warn","ts":"2025-10-18T17:50:20.526420Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"99f9e9c79f233aa7","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-18T17:50:20.526489Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"99f9e9c79f233aa7","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-18T17:50:23.070868Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"99f9e9c79f233aa7","rtt":"90.25454ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-18T17:50:23.070914Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"99f9e9c79f233aa7","rtt":"92.129317ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-18T17:50:24.527842Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"99f9e9c79f233aa7","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-18T17:50:24.527894Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"99f9e9c79f233aa7","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-18T17:50:28.072088Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"99f9e9c79f233aa7","rtt":"92.129317ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-18T17:50:28.072113Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"99f9e9c79f233aa7","rtt":"90.25454ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-18T17:50:28.529752Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"99f9e9c79f233aa7","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-18T17:50:28.529805Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"99f9e9c79f233aa7","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"info","ts":"2025-10-18T17:50:29.518632Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"99f9e9c79f233aa7","stream-type":"stream Message"}
	{"level":"info","ts":"2025-10-18T17:50:29.518747Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"99f9e9c79f233aa7"}
	{"level":"info","ts":"2025-10-18T17:50:29.518785Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7"}
	{"level":"info","ts":"2025-10-18T17:50:29.531459Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"99f9e9c79f233aa7","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2025-10-18T17:50:29.531564Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7"}
	{"level":"info","ts":"2025-10-18T17:50:29.566541Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7"}
	{"level":"info","ts":"2025-10-18T17:50:29.568631Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"99f9e9c79f233aa7"}
	{"level":"warn","ts":"2025-10-18T17:50:55.845679Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"101.649618ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets/kube-system/coredns-66bc5c9577\" limit:1 ","response":"range_response_count:1 size:4149"}
	{"level":"info","ts":"2025-10-18T17:50:55.845748Z","caller":"traceutil/trace.go:172","msg":"trace[408244320] range","detail":"{range_begin:/registry/replicasets/kube-system/coredns-66bc5c9577; range_end:; response_count:1; response_revision:3630; }","duration":"101.731957ms","start":"2025-10-18T17:50:55.744004Z","end":"2025-10-18T17:50:55.845736Z","steps":["trace[408244320] 'agreement among raft nodes before linearized reading'  (duration: 98.079225ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T17:51:04.884807Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"171.562827ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/\" range_end:\"/registry/events0\" limit:500 ","response":"range_response_count:500 size:368084"}
	{"level":"info","ts":"2025-10-18T17:51:04.884868Z","caller":"traceutil/trace.go:172","msg":"trace[1495845859] range","detail":"{range_begin:/registry/events/; range_end:/registry/events0; response_count:500; response_revision:3673; }","duration":"171.651402ms","start":"2025-10-18T17:51:04.713205Z","end":"2025-10-18T17:51:04.884856Z","steps":["trace[1495845859] 'range keys from bolt db'  (duration: 170.557243ms)"],"step_count":1}
	
	
	==> kernel <==
	 17:51:09 up  1:33,  0 user,  load average: 4.93, 2.24, 1.42
	Linux ha-181800 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [45c33b76be4e1c5e61c683306b76aeb0fcbfda863ba2562aee4d85f222728470] <==
	E1018 17:50:44.474031       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1018 17:50:44.474037       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1018 17:50:44.474320       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1018 17:50:45.874766       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 17:50:45.874813       1 metrics.go:72] Registering metrics
	I1018 17:50:45.874874       1 controller.go:711] "Syncing nftables rules"
	I1018 17:50:54.472199       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 17:50:54.472257       1 main.go:301] handling current node
	I1018 17:50:54.476828       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1018 17:50:54.476996       1 main.go:324] Node ha-181800-m02 has CIDR [10.244.1.0/24] 
	I1018 17:50:54.477327       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.49.3 Flags: [] Table: 0 Realm: 0} 
	I1018 17:50:54.478599       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1018 17:50:54.478620       1 main.go:324] Node ha-181800-m03 has CIDR [10.244.2.0/24] 
	I1018 17:50:54.478703       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.168.49.4 Flags: [] Table: 0 Realm: 0} 
	I1018 17:50:54.478760       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1018 17:50:54.478767       1 main.go:324] Node ha-181800-m04 has CIDR [10.244.3.0/24] 
	I1018 17:50:54.478814       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 192.168.49.5 Flags: [] Table: 0 Realm: 0} 
	I1018 17:51:04.473366       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1018 17:51:04.473481       1 main.go:324] Node ha-181800-m02 has CIDR [10.244.1.0/24] 
	I1018 17:51:04.473778       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1018 17:51:04.473819       1 main.go:324] Node ha-181800-m03 has CIDR [10.244.2.0/24] 
	I1018 17:51:04.473921       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1018 17:51:04.473929       1 main.go:324] Node ha-181800-m04 has CIDR [10.244.3.0/24] 
	I1018 17:51:04.474005       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 17:51:04.474011       1 main.go:301] handling current node
	
	
	==> kube-apiserver [787ba7d1db5885d5987b39cc564271b65d0c3534789595970e69e1fc2af692fa] <==
	I1018 17:50:08.637365       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 17:50:08.648586       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1018 17:50:08.649478       1 aggregator.go:171] initial CRD sync complete...
	I1018 17:50:08.658365       1 autoregister_controller.go:144] Starting autoregister controller
	I1018 17:50:08.658478       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 17:50:08.658528       1 cache.go:39] Caches are synced for autoregister controller
	I1018 17:50:08.648742       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1018 17:50:08.660408       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1018 17:50:08.685820       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1018 17:50:08.685952       1 policy_source.go:240] refreshing policies
	I1018 17:50:08.705489       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1018 17:50:08.711819       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1018 17:50:08.721543       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1018 17:50:08.729935       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1018 17:50:08.730318       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1018 17:50:08.730492       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1018 17:50:08.730520       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1018 17:50:08.730960       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1018 17:50:08.746648       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1018 17:50:08.747504       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 17:50:09.243989       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 17:50:13.235609       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 17:50:36.709527       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1018 17:50:36.815877       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 17:50:46.351258       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-apiserver [7df0159a16497989a32ac40623e8901229679b8716e6b590b84a0d3e1054f4d6] <==
	I1018 17:49:21.128362       1 server.go:150] Version: v1.34.1
	I1018 17:49:21.128401       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W1018 17:49:22.017042       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=scheduling.k8s.io/v1alpha1
	W1018 17:49:22.017075       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=admissionregistration.k8s.io/v1alpha1
	W1018 17:49:22.017084       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=certificates.k8s.io/v1alpha1
	W1018 17:49:22.017089       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=imagepolicy.k8s.io/v1alpha1
	W1018 17:49:22.017094       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=rbac.authorization.k8s.io/v1alpha1
	W1018 17:49:22.017098       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storage.k8s.io/v1alpha1
	W1018 17:49:22.017103       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=coordination.k8s.io/v1alpha2
	W1018 17:49:22.017107       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=resource.k8s.io/v1alpha3
	W1018 17:49:22.017111       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storagemigration.k8s.io/v1alpha1
	W1018 17:49:22.017116       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=internal.apiserver.k8s.io/v1alpha1
	W1018 17:49:22.017120       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=authentication.k8s.io/v1alpha1
	W1018 17:49:22.017125       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=node.k8s.io/v1alpha1
	W1018 17:49:22.035548       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1018 17:49:22.037326       1 logging.go:55] [core] [Channel #4 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1018 17:49:22.037937       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	I1018 17:49:22.044391       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1018 17:49:22.056396       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I1018 17:49:22.056496       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1018 17:49:22.056813       1 instance.go:239] Using reconciler: lease
	W1018 17:49:22.058127       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1018 17:49:42.034705       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1018 17:49:42.036960       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F1018 17:49:42.058557       1 instance.go:232] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [4cff6e37b85af70621f4b47faf3b854223fcae935be9ad45a9a99a523f33574b] <==
	I1018 17:50:17.456776       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 17:50:17.456827       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1018 17:50:17.459438       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 17:50:17.465430       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1018 17:50:17.465499       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 17:50:17.471365       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1018 17:50:17.475715       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 17:50:17.477471       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1018 17:50:17.478740       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-181800-m03"
	I1018 17:50:17.478810       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-181800-m04"
	I1018 17:50:17.478834       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-181800"
	I1018 17:50:17.478868       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-181800-m02"
	I1018 17:50:17.479116       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1018 17:50:17.483521       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1018 17:50:17.491656       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 17:50:17.491691       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 17:50:17.491699       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 17:50:17.491580       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1018 17:50:17.503394       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 17:50:17.508362       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1018 17:50:17.509154       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 17:50:50.411726       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-181800-m04"
	I1018 17:50:55.780269       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-kgtwl EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-kgtwl\": the object has been modified; please apply your changes to the latest version and try again"
	I1018 17:50:55.782431       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"9f28e5d3-f804-46e7-b8a3-f9f96165b245", APIVersion:"v1", ResourceVersion:"306", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-kgtwl EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-kgtwl": the object has been modified; please apply your changes to the latest version and try again
	E1018 17:50:55.860481       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/coredns-66bc5c9577\" failed with Operation cannot be fulfilled on replicasets.apps \"coredns-66bc5c9577\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	
	
	==> kube-controller-manager [bd6f9d7be603729a0a5200b910dc4c63002c84e58b83cb98debb890cf0bf202d] <==
	I1018 17:49:24.964069       1 serving.go:386] Generated self-signed cert in-memory
	I1018 17:49:25.434782       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1018 17:49:25.434808       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 17:49:25.436324       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1018 17:49:25.436542       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1018 17:49:25.436706       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1018 17:49:25.436723       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1018 17:49:45.439754       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8443/healthz\": dial tcp 192.168.49.2:8443: connect: connection refused"
	
	
	==> kube-proxy [8aea864f19933a28597488b60aa422e08bea2bfd07e84bd2fec57087062dc95f] <==
	I1018 17:50:15.663641       1 server_linux.go:53] "Using iptables proxy"
	I1018 17:50:16.334903       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 17:50:16.464013       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 17:50:16.464050       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1018 17:50:16.464138       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 17:50:16.493669       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 17:50:16.493728       1 server_linux.go:132] "Using iptables Proxier"
	I1018 17:50:16.497992       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 17:50:16.498301       1 server.go:527] "Version info" version="v1.34.1"
	I1018 17:50:16.498377       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 17:50:16.507101       1 config.go:200] "Starting service config controller"
	I1018 17:50:16.507206       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 17:50:16.507258       1 config.go:106] "Starting endpoint slice config controller"
	I1018 17:50:16.507322       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 17:50:16.507360       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 17:50:16.507388       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 17:50:16.510070       1 config.go:309] "Starting node config controller"
	I1018 17:50:16.510095       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 17:50:16.510103       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 17:50:16.607760       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 17:50:16.607802       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 17:50:16.607844       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [fb83e2f9880f48e77ccba9ff1a0240a5eacc8c5f0b7758c70e7c19289ba8795a] <==
	E1018 17:49:18.573967       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 17:49:18.841410       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 17:49:19.275891       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 17:49:19.842357       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 17:49:20.476775       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 17:49:37.786434       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 17:49:40.793324       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 17:49:41.589579       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1018 17:49:41.815367       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 17:49:41.825500       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 17:49:42.301676       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 17:49:43.065969       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:39986->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 17:49:43.066081       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:40092->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 17:49:43.066189       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:40068->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 17:49:43.066278       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:40058->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 17:49:43.066363       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:40052->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 17:49:43.066439       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:40100->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 17:49:43.066513       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:40112->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 17:49:43.066604       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:40060->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 17:49:43.066611       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:39994->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 17:49:43.066695       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:40008->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 17:49:43.066704       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:40038->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 17:49:43.066779       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:39980->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 17:49:43.066800       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:40040->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I1018 17:50:11.465530       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 17:50:12 ha-181800 kubelet[798]: I1018 17:50:12.842479     798 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ha-181800"
	Oct 18 17:50:12 ha-181800 kubelet[798]: E1018 17:50:12.856112     798 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ha-181800\" already exists" pod="kube-system/kube-controller-manager-ha-181800"
	Oct 18 17:50:12 ha-181800 kubelet[798]: I1018 17:50:12.856349     798 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ha-181800"
	Oct 18 17:50:12 ha-181800 kubelet[798]: E1018 17:50:12.867959     798 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ha-181800\" already exists" pod="kube-system/kube-scheduler-ha-181800"
	Oct 18 17:50:12 ha-181800 kubelet[798]: I1018 17:50:12.868003     798 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-vip-ha-181800"
	Oct 18 17:50:12 ha-181800 kubelet[798]: E1018 17:50:12.881408     798 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-vip-ha-181800\" already exists" pod="kube-system/kube-vip-ha-181800"
	Oct 18 17:50:12 ha-181800 kubelet[798]: I1018 17:50:12.881451     798 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-ha-181800"
	Oct 18 17:50:12 ha-181800 kubelet[798]: E1018 17:50:12.896352     798 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-ha-181800\" already exists" pod="kube-system/etcd-ha-181800"
	Oct 18 17:50:13 ha-181800 kubelet[798]: I1018 17:50:13.091654     798 apiserver.go:52] "Watching apiserver"
	Oct 18 17:50:13 ha-181800 kubelet[798]: I1018 17:50:13.099077     798 scope.go:117] "RemoveContainer" containerID="bd6f9d7be603729a0a5200b910dc4c63002c84e58b83cb98debb890cf0bf202d"
	Oct 18 17:50:13 ha-181800 kubelet[798]: I1018 17:50:13.216894     798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5edfc356-9d49-4895-b36a-06c2bd39155a-xtables-lock\") pod \"kindnet-72mvm\" (UID: \"5edfc356-9d49-4895-b36a-06c2bd39155a\") " pod="kube-system/kindnet-72mvm"
	Oct 18 17:50:13 ha-181800 kubelet[798]: I1018 17:50:13.217054     798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/15b89226-91ae-478f-acfe-7841776b1377-xtables-lock\") pod \"kube-proxy-stgvm\" (UID: \"15b89226-91ae-478f-acfe-7841776b1377\") " pod="kube-system/kube-proxy-stgvm"
	Oct 18 17:50:13 ha-181800 kubelet[798]: I1018 17:50:13.217077     798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/15b89226-91ae-478f-acfe-7841776b1377-lib-modules\") pod \"kube-proxy-stgvm\" (UID: \"15b89226-91ae-478f-acfe-7841776b1377\") " pod="kube-system/kube-proxy-stgvm"
	Oct 18 17:50:13 ha-181800 kubelet[798]: I1018 17:50:13.217093     798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/3c6521cd-8e1b-46aa-96a3-39e475e1426c-tmp\") pod \"storage-provisioner\" (UID: \"3c6521cd-8e1b-46aa-96a3-39e475e1426c\") " pod="kube-system/storage-provisioner"
	Oct 18 17:50:13 ha-181800 kubelet[798]: I1018 17:50:13.217110     798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/5edfc356-9d49-4895-b36a-06c2bd39155a-cni-cfg\") pod \"kindnet-72mvm\" (UID: \"5edfc356-9d49-4895-b36a-06c2bd39155a\") " pod="kube-system/kindnet-72mvm"
	Oct 18 17:50:13 ha-181800 kubelet[798]: I1018 17:50:13.217127     798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5edfc356-9d49-4895-b36a-06c2bd39155a-lib-modules\") pod \"kindnet-72mvm\" (UID: \"5edfc356-9d49-4895-b36a-06c2bd39155a\") " pod="kube-system/kindnet-72mvm"
	Oct 18 17:50:13 ha-181800 kubelet[798]: I1018 17:50:13.222063     798 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 18 17:50:13 ha-181800 kubelet[798]: I1018 17:50:13.266801     798 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 18 17:50:13 ha-181800 kubelet[798]: W1018 17:50:13.559633     798 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/5743bf3218eb5aef405eed98d37c004771c8345cbd69290b8943dc0c7cbde7c2/crio-c1b08873679284c397e63dc0b5e86a2778290edfaa47a2d3af86e787870c2624 WatchSource:0}: Error finding container c1b08873679284c397e63dc0b5e86a2778290edfaa47a2d3af86e787870c2624: Status 404 returned error can't find the container with id c1b08873679284c397e63dc0b5e86a2778290edfaa47a2d3af86e787870c2624
	Oct 18 17:50:13 ha-181800 kubelet[798]: W1018 17:50:13.569533     798 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/5743bf3218eb5aef405eed98d37c004771c8345cbd69290b8943dc0c7cbde7c2/crio-0e97ce88bd2d3a36101a0a9930710ba30f34091e61ed0ed0249bd68b5d0f6fa7 WatchSource:0}: Error finding container 0e97ce88bd2d3a36101a0a9930710ba30f34091e61ed0ed0249bd68b5d0f6fa7: Status 404 returned error can't find the container with id 0e97ce88bd2d3a36101a0a9930710ba30f34091e61ed0ed0249bd68b5d0f6fa7
	Oct 18 17:50:13 ha-181800 kubelet[798]: W1018 17:50:13.789592     798 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/5743bf3218eb5aef405eed98d37c004771c8345cbd69290b8943dc0c7cbde7c2/crio-2d6e6e05d930c610e9ac4942479166d3061f0b37055dbc9645478f2923f1ff53 WatchSource:0}: Error finding container 2d6e6e05d930c610e9ac4942479166d3061f0b37055dbc9645478f2923f1ff53: Status 404 returned error can't find the container with id 2d6e6e05d930c610e9ac4942479166d3061f0b37055dbc9645478f2923f1ff53
	Oct 18 17:50:17 ha-181800 kubelet[798]: E1018 17:50:17.091585     798 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/351deab77f22682d337e98537451625e6f5def60ef97378fe2ea489cd9cb173d/diff" to get inode usage: stat /var/lib/containers/storage/overlay/351deab77f22682d337e98537451625e6f5def60ef97378fe2ea489cd9cb173d/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_kube-controller-manager-ha-181800_9656c3d6ff12279b641632c7e3275a8a/kube-controller-manager/6.log" to get inode usage: stat /var/log/pods/kube-system_kube-controller-manager-ha-181800_9656c3d6ff12279b641632c7e3275a8a/kube-controller-manager/6.log: no such file or directory
	Oct 18 17:50:17 ha-181800 kubelet[798]: E1018 17:50:17.097904     798 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/3a8ceae8950ea9bca2bf6a05f4cb7633f55f4458c755f32741110642edbfd7ba/diff" to get inode usage: stat /var/lib/containers/storage/overlay/3a8ceae8950ea9bca2bf6a05f4cb7633f55f4458c755f32741110642edbfd7ba/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_kube-apiserver-ha-181800_f173b0166ea7317b529b58e20ef8d65f/kube-apiserver/6.log" to get inode usage: stat /var/log/pods/kube-system_kube-apiserver-ha-181800_f173b0166ea7317b529b58e20ef8d65f/kube-apiserver/6.log: no such file or directory
	Oct 18 17:50:17 ha-181800 kubelet[798]: E1018 17:50:17.148404     798 cadvisor_stats_provider.go:567] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/crio/crio-dad8e190116effc9294125133d608015a4f2ec86c95f308f26d5e4d771de4985\": RecentStats: unable to find data in memory cache]"
	Oct 18 17:50:45 ha-181800 kubelet[798]: I1018 17:50:45.570659     798 scope.go:117] "RemoveContainer" containerID="f2f15c809753a0cd811b332e6f6a8f9b5be888da593a2286ff085903e5ec3a12"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-181800 -n ha-181800
helpers_test.go:269: (dbg) Run:  kubectl --context ha-181800 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/DegradedAfterClusterRestart FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (3.92s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (94.16s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 node add --control-plane --alsologtostderr -v 5
E1018 17:52:03.655204    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/functional-306136/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-181800 node add --control-plane --alsologtostderr -v 5: (1m29.351891832s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-181800 status --alsologtostderr -v 5: (1.359266107s)
ha_test.go:618: status says not all three control-plane nodes are present: args "out/minikube-linux-arm64 -p ha-181800 status --alsologtostderr -v 5": ha-181800
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-181800-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-181800-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-181800-m04
type: Worker
host: Running
kubelet: Running

                                                
                                                
ha-181800-m05
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha_test.go:621: status says not all four hosts are running: args "out/minikube-linux-arm64 -p ha-181800 status --alsologtostderr -v 5": ha-181800
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-181800-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-181800-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-181800-m04
type: Worker
host: Running
kubelet: Running

                                                
                                                
ha-181800-m05
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha_test.go:624: status says not all four kubelets are running: args "out/minikube-linux-arm64 -p ha-181800 status --alsologtostderr -v 5": ha-181800
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-181800-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-181800-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-181800-m04
type: Worker
host: Running
kubelet: Running

                                                
                                                
ha-181800-m05
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha_test.go:627: status says not all three apiservers are running: args "out/minikube-linux-arm64 -p ha-181800 status --alsologtostderr -v 5": ha-181800
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-181800-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-181800-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-181800-m04
type: Worker
host: Running
kubelet: Running

                                                
                                                
ha-181800-m05
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-181800
helpers_test.go:243: (dbg) docker inspect ha-181800:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5743bf3218eb5aef405eed98d37c004771c8345cbd69290b8943dc0c7cbde7c2",
	        "Created": "2025-10-18T17:32:56.632116312Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 69617,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T17:48:09.683613005Z",
	            "FinishedAt": "2025-10-18T17:48:08.862033359Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/5743bf3218eb5aef405eed98d37c004771c8345cbd69290b8943dc0c7cbde7c2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5743bf3218eb5aef405eed98d37c004771c8345cbd69290b8943dc0c7cbde7c2/hostname",
	        "HostsPath": "/var/lib/docker/containers/5743bf3218eb5aef405eed98d37c004771c8345cbd69290b8943dc0c7cbde7c2/hosts",
	        "LogPath": "/var/lib/docker/containers/5743bf3218eb5aef405eed98d37c004771c8345cbd69290b8943dc0c7cbde7c2/5743bf3218eb5aef405eed98d37c004771c8345cbd69290b8943dc0c7cbde7c2-json.log",
	        "Name": "/ha-181800",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-181800:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-181800",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5743bf3218eb5aef405eed98d37c004771c8345cbd69290b8943dc0c7cbde7c2",
	                "LowerDir": "/var/lib/docker/overlay2/3ee9cac217f1df57c366a054bdb2c2082365ef42fdd9cc6be8e55acf85cb35b8-init/diff:/var/lib/docker/overlay2/584ab177b02ad2db5330471b7171ad39934c457d8615b9ee4939a04b59f78474/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3ee9cac217f1df57c366a054bdb2c2082365ef42fdd9cc6be8e55acf85cb35b8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3ee9cac217f1df57c366a054bdb2c2082365ef42fdd9cc6be8e55acf85cb35b8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3ee9cac217f1df57c366a054bdb2c2082365ef42fdd9cc6be8e55acf85cb35b8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-181800",
	                "Source": "/var/lib/docker/volumes/ha-181800/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-181800",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-181800",
	                "name.minikube.sigs.k8s.io": "ha-181800",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4110ab73f7f9137e0eb013438b540b426c3fa9fedc1bed76ec7ffcc4fc35499f",
	            "SandboxKey": "/var/run/docker/netns/4110ab73f7f9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32818"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32819"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32822"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32820"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32821"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-181800": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d2:81:2f:47:7d:4c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "903568cdf824d38f52cb9a58c116a852c83eb599cf8cc87e25ba21b593e45142",
	                    "EndpointID": "9a2af9d91b868a8642ef1db81d818bc623c9c1134408c932f61ec269578e0c92",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-181800",
	                        "5743bf3218eb"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
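The "Ports" block in the inspect output above is how minikube reaches the restarted node over SSH; the same Go-template query that shows up later in the start log can be run by hand to recover the forwarded port (a minimal sketch, assuming the ha-181800 container still exists on the host):

	# Print the host port Docker mapped to the container's SSH port (22/tcp).
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-181800
	# For this run the mapping above resolves to 127.0.0.1:32818.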
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-181800 -n ha-181800
helpers_test.go:252: <<< TestMultiControlPlane/serial/AddSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p ha-181800 logs -n 25: (2.149646172s)
helpers_test.go:260: TestMultiControlPlane/serial/AddSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-181800 ssh -n ha-181800-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ ssh     │ ha-181800 ssh -n ha-181800-m04 sudo cat /home/docker/cp-test_ha-181800-m03_ha-181800-m04.txt                                         │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ cp      │ ha-181800 cp testdata/cp-test.txt ha-181800-m04:/home/docker/cp-test.txt                                                             │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ ssh     │ ha-181800 ssh -n ha-181800-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ cp      │ ha-181800 cp ha-181800-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1463328482/001/cp-test_ha-181800-m04.txt │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ ssh     │ ha-181800 ssh -n ha-181800-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ cp      │ ha-181800 cp ha-181800-m04:/home/docker/cp-test.txt ha-181800:/home/docker/cp-test_ha-181800-m04_ha-181800.txt                       │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ ssh     │ ha-181800 ssh -n ha-181800-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ ssh     │ ha-181800 ssh -n ha-181800 sudo cat /home/docker/cp-test_ha-181800-m04_ha-181800.txt                                                 │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ cp      │ ha-181800 cp ha-181800-m04:/home/docker/cp-test.txt ha-181800-m02:/home/docker/cp-test_ha-181800-m04_ha-181800-m02.txt               │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ ssh     │ ha-181800 ssh -n ha-181800-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ ssh     │ ha-181800 ssh -n ha-181800-m02 sudo cat /home/docker/cp-test_ha-181800-m04_ha-181800-m02.txt                                         │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ cp      │ ha-181800 cp ha-181800-m04:/home/docker/cp-test.txt ha-181800-m03:/home/docker/cp-test_ha-181800-m04_ha-181800-m03.txt               │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ ssh     │ ha-181800 ssh -n ha-181800-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ ssh     │ ha-181800 ssh -n ha-181800-m03 sudo cat /home/docker/cp-test_ha-181800-m04_ha-181800-m03.txt                                         │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ node    │ ha-181800 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ node    │ ha-181800 node start m02 --alsologtostderr -v 5                                                                                      │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:39 UTC │
	│ node    │ ha-181800 node list --alsologtostderr -v 5                                                                                           │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:39 UTC │                     │
	│ stop    │ ha-181800 stop --alsologtostderr -v 5                                                                                                │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:39 UTC │ 18 Oct 25 17:39 UTC │
	│ start   │ ha-181800 start --wait true --alsologtostderr -v 5                                                                                   │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:39 UTC │                     │
	│ node    │ ha-181800 node list --alsologtostderr -v 5                                                                                           │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:47 UTC │                     │
	│ node    │ ha-181800 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:47 UTC │                     │
	│ stop    │ ha-181800 stop --alsologtostderr -v 5                                                                                                │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:47 UTC │ 18 Oct 25 17:48 UTC │
	│ start   │ ha-181800 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio                                         │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:48 UTC │ 18 Oct 25 17:51 UTC │
	│ node    │ ha-181800 node add --control-plane --alsologtostderr -v 5                                                                            │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:51 UTC │ 18 Oct 25 17:52 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 17:48:09
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 17:48:09.416034   69488 out.go:360] Setting OutFile to fd 1 ...
	I1018 17:48:09.416413   69488 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 17:48:09.416429   69488 out.go:374] Setting ErrFile to fd 2...
	I1018 17:48:09.416435   69488 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 17:48:09.416751   69488 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-2509/.minikube/bin
	I1018 17:48:09.417210   69488 out.go:368] Setting JSON to false
	I1018 17:48:09.418048   69488 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":5439,"bootTime":1760804251,"procs":151,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1018 17:48:09.418116   69488 start.go:141] virtualization:  
	I1018 17:48:09.421406   69488 out.go:179] * [ha-181800] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 17:48:09.425201   69488 out.go:179]   - MINIKUBE_LOCATION=21409
	I1018 17:48:09.425270   69488 notify.go:220] Checking for updates...
	I1018 17:48:09.431395   69488 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 17:48:09.434249   69488 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-2509/kubeconfig
	I1018 17:48:09.437177   69488 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-2509/.minikube
	I1018 17:48:09.439990   69488 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 17:48:09.442873   69488 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 17:48:09.446186   69488 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:48:09.446753   69488 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 17:48:09.469689   69488 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 17:48:09.469810   69488 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 17:48:09.525756   69488 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-18 17:48:09.516473467 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 17:48:09.525901   69488 docker.go:318] overlay module found
	I1018 17:48:09.529121   69488 out.go:179] * Using the docker driver based on existing profile
	I1018 17:48:09.532020   69488 start.go:305] selected driver: docker
	I1018 17:48:09.532065   69488 start.go:925] validating driver "docker" against &{Name:ha-181800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-181800 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inacc
el:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 17:48:09.532200   69488 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 17:48:09.532300   69488 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 17:48:09.595274   69488 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-18 17:48:09.586260967 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 17:48:09.595672   69488 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 17:48:09.595711   69488 cni.go:84] Creating CNI manager for ""
	I1018 17:48:09.595769   69488 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1018 17:48:09.595821   69488 start.go:349] cluster config:
	{Name:ha-181800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-181800 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 17:48:09.600762   69488 out.go:179] * Starting "ha-181800" primary control-plane node in "ha-181800" cluster
	I1018 17:48:09.603624   69488 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 17:48:09.606573   69488 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 17:48:09.609415   69488 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 17:48:09.609455   69488 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 17:48:09.609472   69488 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1018 17:48:09.609485   69488 cache.go:58] Caching tarball of preloaded images
	I1018 17:48:09.609580   69488 preload.go:233] Found /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 17:48:09.609590   69488 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 17:48:09.609731   69488 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/config.json ...
	I1018 17:48:09.629715   69488 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 17:48:09.629738   69488 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 17:48:09.629751   69488 cache.go:232] Successfully downloaded all kic artifacts
	I1018 17:48:09.629773   69488 start.go:360] acquireMachinesLock for ha-181800: {Name:mk3f5dfba2ab7d01f94f924dfcc5edab5f076901 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 17:48:09.629829   69488 start.go:364] duration metric: took 36.414µs to acquireMachinesLock for "ha-181800"
	I1018 17:48:09.629854   69488 start.go:96] Skipping create...Using existing machine configuration
	I1018 17:48:09.629859   69488 fix.go:54] fixHost starting: 
	I1018 17:48:09.630111   69488 cli_runner.go:164] Run: docker container inspect ha-181800 --format={{.State.Status}}
	I1018 17:48:09.646601   69488 fix.go:112] recreateIfNeeded on ha-181800: state=Stopped err=<nil>
	W1018 17:48:09.646633   69488 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 17:48:09.649905   69488 out.go:252] * Restarting existing docker container for "ha-181800" ...
	I1018 17:48:09.649988   69488 cli_runner.go:164] Run: docker start ha-181800
	I1018 17:48:09.903186   69488 cli_runner.go:164] Run: docker container inspect ha-181800 --format={{.State.Status}}
	I1018 17:48:09.925021   69488 kic.go:430] container "ha-181800" state is running.
	I1018 17:48:09.925620   69488 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-181800
	I1018 17:48:09.948773   69488 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/config.json ...
	I1018 17:48:09.949327   69488 machine.go:93] provisionDockerMachine start ...
	I1018 17:48:09.949403   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:48:09.972918   69488 main.go:141] libmachine: Using SSH client type: native
	I1018 17:48:09.973247   69488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I1018 17:48:09.973265   69488 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 17:48:09.973813   69488 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1018 17:48:13.124675   69488 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-181800
	
	I1018 17:48:13.124706   69488 ubuntu.go:182] provisioning hostname "ha-181800"
	I1018 17:48:13.124768   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:48:13.142493   69488 main.go:141] libmachine: Using SSH client type: native
	I1018 17:48:13.142802   69488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I1018 17:48:13.142819   69488 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-181800 && echo "ha-181800" | sudo tee /etc/hostname
	I1018 17:48:13.298978   69488 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-181800
	
	I1018 17:48:13.299071   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:48:13.318549   69488 main.go:141] libmachine: Using SSH client type: native
	I1018 17:48:13.318864   69488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I1018 17:48:13.318885   69488 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-181800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-181800/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-181800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 17:48:13.464891   69488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 17:48:13.464913   69488 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-2509/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-2509/.minikube}
	I1018 17:48:13.464930   69488 ubuntu.go:190] setting up certificates
	I1018 17:48:13.464957   69488 provision.go:84] configureAuth start
	I1018 17:48:13.465015   69488 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-181800
	I1018 17:48:13.482208   69488 provision.go:143] copyHostCerts
	I1018 17:48:13.482250   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem
	I1018 17:48:13.482283   69488 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem, removing ...
	I1018 17:48:13.482302   69488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem
	I1018 17:48:13.482377   69488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem (1078 bytes)
	I1018 17:48:13.482463   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem
	I1018 17:48:13.482486   69488 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem, removing ...
	I1018 17:48:13.482493   69488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem
	I1018 17:48:13.482520   69488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem (1123 bytes)
	I1018 17:48:13.482562   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem
	I1018 17:48:13.482582   69488 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem, removing ...
	I1018 17:48:13.482588   69488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem
	I1018 17:48:13.482612   69488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem (1675 bytes)
	I1018 17:48:13.482660   69488 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem org=jenkins.ha-181800 san=[127.0.0.1 192.168.49.2 ha-181800 localhost minikube]
	I1018 17:48:14.423915   69488 provision.go:177] copyRemoteCerts
	I1018 17:48:14.423988   69488 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 17:48:14.424038   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:48:14.441172   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800/id_rsa Username:docker}
	I1018 17:48:14.544666   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1018 17:48:14.544730   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1018 17:48:14.562271   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1018 17:48:14.562355   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 17:48:14.579774   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1018 17:48:14.579882   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 17:48:14.597738   69488 provision.go:87] duration metric: took 1.132758135s to configureAuth
	I1018 17:48:14.597766   69488 ubuntu.go:206] setting minikube options for container-runtime
	I1018 17:48:14.598014   69488 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:48:14.598118   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:48:14.616530   69488 main.go:141] libmachine: Using SSH client type: native
	I1018 17:48:14.616832   69488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I1018 17:48:14.616852   69488 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 17:48:14.938623   69488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 17:48:14.938694   69488 machine.go:96] duration metric: took 4.989343324s to provisionDockerMachine
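	# Illustrative check (not captured output; assumes the profile is still up): read back
	# the CRIO_MINIKUBE_OPTIONS drop-in written by the SSH command above.
	out/minikube-linux-arm64 -p ha-181800 ssh -- sudo cat /etc/sysconfig/crio.minikube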
	I1018 17:48:14.938719   69488 start.go:293] postStartSetup for "ha-181800" (driver="docker")
	I1018 17:48:14.938743   69488 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 17:48:14.938827   69488 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 17:48:14.938907   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:48:14.961006   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800/id_rsa Username:docker}
	I1018 17:48:15.069145   69488 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 17:48:15.072788   69488 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 17:48:15.072820   69488 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 17:48:15.072832   69488 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/addons for local assets ...
	I1018 17:48:15.072889   69488 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/files for local assets ...
	I1018 17:48:15.073008   69488 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> 43202.pem in /etc/ssl/certs
	I1018 17:48:15.073020   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> /etc/ssl/certs/43202.pem
	I1018 17:48:15.073124   69488 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 17:48:15.080710   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /etc/ssl/certs/43202.pem (1708 bytes)
	I1018 17:48:15.098679   69488 start.go:296] duration metric: took 159.932309ms for postStartSetup
	I1018 17:48:15.098839   69488 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 17:48:15.098888   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:48:15.116684   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800/id_rsa Username:docker}
	I1018 17:48:15.217789   69488 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 17:48:15.222543   69488 fix.go:56] duration metric: took 5.59267659s for fixHost
	I1018 17:48:15.222570   69488 start.go:83] releasing machines lock for "ha-181800", held for 5.59272729s
	I1018 17:48:15.222640   69488 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-181800
	I1018 17:48:15.239602   69488 ssh_runner.go:195] Run: cat /version.json
	I1018 17:48:15.239657   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:48:15.239935   69488 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 17:48:15.239989   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:48:15.258489   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800/id_rsa Username:docker}
	I1018 17:48:15.259704   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800/id_rsa Username:docker}
	I1018 17:48:15.360628   69488 ssh_runner.go:195] Run: systemctl --version
	I1018 17:48:15.453252   69488 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 17:48:15.490459   69488 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 17:48:15.494882   69488 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 17:48:15.494987   69488 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 17:48:15.502526   69488 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 17:48:15.502555   69488 start.go:495] detecting cgroup driver to use...
	I1018 17:48:15.502585   69488 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 17:48:15.502634   69488 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 17:48:15.518083   69488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 17:48:15.531171   69488 docker.go:218] disabling cri-docker service (if available) ...
	I1018 17:48:15.531254   69488 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 17:48:15.547013   69488 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 17:48:15.559697   69488 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 17:48:15.666369   69488 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 17:48:15.774518   69488 docker.go:234] disabling docker service ...
	I1018 17:48:15.774580   69488 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 17:48:15.789730   69488 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 17:48:15.802288   69488 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 17:48:15.919408   69488 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 17:48:16.029842   69488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 17:48:16.043317   69488 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 17:48:16.059310   69488 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 17:48:16.059453   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:48:16.069280   69488 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 17:48:16.069350   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:48:16.078814   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:48:16.087874   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:48:16.097837   69488 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 17:48:16.106890   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:48:16.115708   69488 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:48:16.123935   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:48:16.132770   69488 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 17:48:16.140320   69488 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 17:48:16.147761   69488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 17:48:16.260916   69488 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 17:48:16.404712   69488 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 17:48:16.404830   69488 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 17:48:16.408509   69488 start.go:563] Will wait 60s for crictl version
	I1018 17:48:16.408623   69488 ssh_runner.go:195] Run: which crictl
	I1018 17:48:16.411907   69488 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 17:48:16.435137   69488 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
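	# Illustrative check (not captured output): the sed commands above rewrite the pause
	# image and cgroup driver in /etc/crio/crio.conf.d/02-crio.conf; confirm them with:
	out/minikube-linux-arm64 -p ha-181800 ssh -- sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf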
	I1018 17:48:16.435295   69488 ssh_runner.go:195] Run: crio --version
	I1018 17:48:16.466039   69488 ssh_runner.go:195] Run: crio --version
	I1018 17:48:16.501936   69488 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 17:48:16.504878   69488 cli_runner.go:164] Run: docker network inspect ha-181800 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 17:48:16.520780   69488 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1018 17:48:16.524665   69488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 17:48:16.534613   69488 kubeadm.go:883] updating cluster {Name:ha-181800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-181800 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:
false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 17:48:16.534762   69488 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 17:48:16.534819   69488 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 17:48:16.574503   69488 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 17:48:16.574531   69488 crio.go:433] Images already preloaded, skipping extraction
	I1018 17:48:16.574590   69488 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 17:48:16.600203   69488 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 17:48:16.600227   69488 cache_images.go:85] Images are preloaded, skipping loading
	I1018 17:48:16.600237   69488 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1018 17:48:16.600342   69488 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-181800 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-181800 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
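	# The unit snippet above is rendered into the kubelet drop-in that is scp'd a few steps
	# below; it can be inspected on the node with (illustrative, not captured output):
	out/minikube-linux-arm64 -p ha-181800 ssh -- sudo systemctl cat kubelet
	out/minikube-linux-arm64 -p ha-181800 ssh -- sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf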
	I1018 17:48:16.600422   69488 ssh_runner.go:195] Run: crio config
	I1018 17:48:16.665910   69488 cni.go:84] Creating CNI manager for ""
	I1018 17:48:16.665937   69488 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1018 17:48:16.665961   69488 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 17:48:16.665986   69488 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-181800 NodeName:ha-181800 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 17:48:16.666112   69488 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-181800"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
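	# The generated kubeadm config above is staged as /var/tmp/minikube/kubeadm.yaml.new
	# (see the scp step below); assuming "kubeadm config validate" is available in this
	# kubeadm version, it can be sanity-checked on the node with (illustrative):
	out/minikube-linux-arm64 -p ha-181800 ssh -- sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new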
	
	I1018 17:48:16.666132   69488 kube-vip.go:115] generating kube-vip config ...
	I1018 17:48:16.666191   69488 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1018 17:48:16.678158   69488 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
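	# Because ip_vs is not loaded, minikube skips kube-vip's IPVS control-plane
	# load-balancing; the ARP-managed VIP in the manifest below is still configured.
	# Illustrative host-side check/remedy (assumes the host kernel ships the module):
	sudo modprobe ip_vs && lsmod | grep ip_vs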
	I1018 17:48:16.678333   69488 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
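	# The manifest above lands in the static pod path (kube-vip.yaml is scp'd below); once
	# the node is up, the rendered file and the VIP on the current leader can be checked
	# with (illustrative, not captured output):
	out/minikube-linux-arm64 -p ha-181800 ssh -- sudo cat /etc/kubernetes/manifests/kube-vip.yaml
	out/minikube-linux-arm64 -p ha-181800 ssh -- "ip addr show eth0 | grep 192.168.49.254"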
	I1018 17:48:16.678419   69488 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 17:48:16.686215   69488 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 17:48:16.686327   69488 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1018 17:48:16.693873   69488 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1018 17:48:16.706512   69488 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 17:48:16.719311   69488 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1018 17:48:16.731738   69488 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1018 17:48:16.744107   69488 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1018 17:48:16.747479   69488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 17:48:16.756979   69488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 17:48:16.873983   69488 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 17:48:16.890078   69488 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800 for IP: 192.168.49.2
	I1018 17:48:16.890141   69488 certs.go:195] generating shared ca certs ...
	I1018 17:48:16.890170   69488 certs.go:227] acquiring lock for ca certs: {Name:mk544ed642fa2832e9f6dd22fa45f3270b7c1ad4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:48:16.890342   69488 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key
	I1018 17:48:16.890408   69488 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key
	I1018 17:48:16.890429   69488 certs.go:257] generating profile certs ...
	I1018 17:48:16.890571   69488 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/client.key
	I1018 17:48:16.890683   69488 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.key.46a58690
	I1018 17:48:16.890745   69488 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.key
	I1018 17:48:16.890767   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1018 17:48:16.890806   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1018 17:48:16.890839   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1018 17:48:16.890866   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1018 17:48:16.890905   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1018 17:48:16.890937   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1018 17:48:16.890965   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1018 17:48:16.891003   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1018 17:48:16.891075   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem (1338 bytes)
	W1018 17:48:16.891135   69488 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320_empty.pem, impossibly tiny 0 bytes
	I1018 17:48:16.891163   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 17:48:16.891206   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem (1078 bytes)
	I1018 17:48:16.891265   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem (1123 bytes)
	I1018 17:48:16.891308   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem (1675 bytes)
	I1018 17:48:16.891389   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem (1708 bytes)
	I1018 17:48:16.891447   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> /usr/share/ca-certificates/43202.pem
	I1018 17:48:16.891488   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:48:16.891521   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem -> /usr/share/ca-certificates/4320.pem
	I1018 17:48:16.892071   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 17:48:16.910107   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 17:48:16.927560   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 17:48:16.944252   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 17:48:16.961007   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1018 17:48:16.981715   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 17:48:17.002129   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 17:48:17.028151   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 17:48:17.050134   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /usr/share/ca-certificates/43202.pem (1708 bytes)
	I1018 17:48:17.076842   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 17:48:17.102342   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem --> /usr/share/ca-certificates/4320.pem (1338 bytes)
	I1018 17:48:17.120809   69488 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 17:48:17.135197   69488 ssh_runner.go:195] Run: openssl version
	I1018 17:48:17.141316   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43202.pem && ln -fs /usr/share/ca-certificates/43202.pem /etc/ssl/certs/43202.pem"
	I1018 17:48:17.149779   69488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43202.pem
	I1018 17:48:17.156384   69488 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 17:19 /usr/share/ca-certificates/43202.pem
	I1018 17:48:17.156498   69488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43202.pem
	I1018 17:48:17.198104   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43202.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 17:48:17.206025   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 17:48:17.214061   69488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:48:17.217558   69488 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:48:17.217636   69488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:48:17.259653   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 17:48:17.267330   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4320.pem && ln -fs /usr/share/ca-certificates/4320.pem /etc/ssl/certs/4320.pem"
	I1018 17:48:17.275410   69488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4320.pem
	I1018 17:48:17.278912   69488 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 17:19 /usr/share/ca-certificates/4320.pem
	I1018 17:48:17.279004   69488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4320.pem
	I1018 17:48:17.319663   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4320.pem /etc/ssl/certs/51391683.0"
	I1018 17:48:17.327893   69488 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 17:48:17.331787   69488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 17:48:17.372669   69488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 17:48:17.413640   69488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 17:48:17.455669   69488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 17:48:17.503310   69488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 17:48:17.553128   69488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
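Each of the `openssl x509 -checkend 86400` calls above asks whether the named certificate expires within the next 24 hours. The same check can be expressed directly with Go's crypto/x509, as in this sketch; the certificate path in main is a placeholder, since the real files live under /var/lib/minikube/certs on the node.

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the first PEM certificate in path
    // expires within d, matching `openssl x509 -checkend <seconds>`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block found in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        // Placeholder path; the log checks apiserver, etcd and front-proxy certs.
        soon, err := expiresWithin("apiserver.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Println("expires within 24h:", soon)
    }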
	I1018 17:48:17.610923   69488 kubeadm.go:400] StartCluster: {Name:ha-181800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-181800 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:fal
se ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fals
e DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 17:48:17.611069   69488 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 17:48:17.611141   69488 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 17:48:17.693793   69488 cri.go:89] found id: "42139c5070f82bb1e1dd7564661f58a74b134ab219b910335d022b2235e65fc0"
	I1018 17:48:17.693817   69488 cri.go:89] found id: "405d4b2711179ef2be985a5942049e2e36688b992d1fd9f96f2e882cfa95bfd5"
	I1018 17:48:17.693822   69488 cri.go:89] found id: "fb83e2f9880f48e77ccba9ff1a0240a5eacc8c5f0b7758c70e7c19289ba8795a"
	I1018 17:48:17.693826   69488 cri.go:89] found id: ""
	I1018 17:48:17.693886   69488 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 17:48:17.727781   69488 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T17:48:17Z" level=error msg="open /run/runc: no such file or directory"
	I1018 17:48:17.727885   69488 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 17:48:17.752985   69488 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 17:48:17.753011   69488 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 17:48:17.753077   69488 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 17:48:17.766549   69488 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 17:48:17.766998   69488 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-181800" does not appear in /home/jenkins/minikube-integration/21409-2509/kubeconfig
	I1018 17:48:17.767116   69488 kubeconfig.go:62] /home/jenkins/minikube-integration/21409-2509/kubeconfig needs updating (will repair): [kubeconfig missing "ha-181800" cluster setting kubeconfig missing "ha-181800" context setting]
	I1018 17:48:17.767408   69488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/kubeconfig: {Name:mk229d7d0b6575771899e0ce8a346bbddf5cf86b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:48:17.768000   69488 kapi.go:59] client config for ha-181800: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/client.key", CAFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, Us
erAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1018 17:48:17.768691   69488 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1018 17:48:17.768713   69488 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1018 17:48:17.768754   69488 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1018 17:48:17.768718   69488 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1018 17:48:17.768800   69488 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1018 17:48:17.768817   69488 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1018 17:48:17.769158   69488 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 17:48:17.777893   69488 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1018 17:48:17.777928   69488 kubeadm.go:601] duration metric: took 24.910349ms to restartPrimaryControlPlane
	I1018 17:48:17.777937   69488 kubeadm.go:402] duration metric: took 167.022952ms to StartCluster
	I1018 17:48:17.777952   69488 settings.go:142] acquiring lock: {Name:mk3a3fd093bc95e20cc1842611fedcbe4a79e692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:48:17.778019   69488 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-2509/kubeconfig
	I1018 17:48:17.778655   69488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/kubeconfig: {Name:mk229d7d0b6575771899e0ce8a346bbddf5cf86b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:48:17.778876   69488 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 17:48:17.778908   69488 start.go:241] waiting for startup goroutines ...
	I1018 17:48:17.778916   69488 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 17:48:17.779460   69488 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:48:17.784791   69488 out.go:179] * Enabled addons: 
	I1018 17:48:17.787780   69488 addons.go:514] duration metric: took 8.843165ms for enable addons: enabled=[]
	I1018 17:48:17.787841   69488 start.go:246] waiting for cluster config update ...
	I1018 17:48:17.787851   69488 start.go:255] writing updated cluster config ...
	I1018 17:48:17.791154   69488 out.go:203] 
	I1018 17:48:17.794423   69488 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:48:17.794545   69488 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/config.json ...
	I1018 17:48:17.797951   69488 out.go:179] * Starting "ha-181800-m02" control-plane node in "ha-181800" cluster
	I1018 17:48:17.800906   69488 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 17:48:17.803852   69488 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 17:48:17.806813   69488 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 17:48:17.806848   69488 cache.go:58] Caching tarball of preloaded images
	I1018 17:48:17.806951   69488 preload.go:233] Found /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 17:48:17.806966   69488 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 17:48:17.807089   69488 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/config.json ...
	I1018 17:48:17.807301   69488 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 17:48:17.833480   69488 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 17:48:17.833505   69488 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 17:48:17.833520   69488 cache.go:232] Successfully downloaded all kic artifacts
	I1018 17:48:17.833542   69488 start.go:360] acquireMachinesLock for ha-181800-m02: {Name:mk36a488c0fbfc8557c6ba291b969aad85b45635 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 17:48:17.833604   69488 start.go:364] duration metric: took 42.142µs to acquireMachinesLock for "ha-181800-m02"
	I1018 17:48:17.833629   69488 start.go:96] Skipping create...Using existing machine configuration
	I1018 17:48:17.833638   69488 fix.go:54] fixHost starting: m02
	I1018 17:48:17.833888   69488 cli_runner.go:164] Run: docker container inspect ha-181800-m02 --format={{.State.Status}}
	I1018 17:48:17.853969   69488 fix.go:112] recreateIfNeeded on ha-181800-m02: state=Stopped err=<nil>
	W1018 17:48:17.853999   69488 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 17:48:17.859511   69488 out.go:252] * Restarting existing docker container for "ha-181800-m02" ...
	I1018 17:48:17.859599   69488 cli_runner.go:164] Run: docker start ha-181800-m02
	I1018 17:48:18.199583   69488 cli_runner.go:164] Run: docker container inspect ha-181800-m02 --format={{.State.Status}}
	I1018 17:48:18.226549   69488 kic.go:430] container "ha-181800-m02" state is running.
	I1018 17:48:18.226893   69488 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-181800-m02
	I1018 17:48:18.262995   69488 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/config.json ...
	I1018 17:48:18.263226   69488 machine.go:93] provisionDockerMachine start ...
	I1018 17:48:18.263282   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m02
	I1018 17:48:18.293143   69488 main.go:141] libmachine: Using SSH client type: native
	I1018 17:48:18.293452   69488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I1018 17:48:18.293466   69488 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 17:48:18.294119   69488 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1018 17:48:21.560416   69488 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-181800-m02
	
	I1018 17:48:21.560480   69488 ubuntu.go:182] provisioning hostname "ha-181800-m02"
	I1018 17:48:21.560583   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m02
	I1018 17:48:21.588400   69488 main.go:141] libmachine: Using SSH client type: native
	I1018 17:48:21.588705   69488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I1018 17:48:21.588717   69488 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-181800-m02 && echo "ha-181800-m02" | sudo tee /etc/hostname
	I1018 17:48:21.918738   69488 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-181800-m02
	
	I1018 17:48:21.918888   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m02
	I1018 17:48:21.950544   69488 main.go:141] libmachine: Using SSH client type: native
	I1018 17:48:21.950842   69488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I1018 17:48:21.950857   69488 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-181800-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-181800-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-181800-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 17:48:22.217685   69488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 17:48:22.217712   69488 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-2509/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-2509/.minikube}
	I1018 17:48:22.217727   69488 ubuntu.go:190] setting up certificates
	I1018 17:48:22.217741   69488 provision.go:84] configureAuth start
	I1018 17:48:22.217804   69488 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-181800-m02
	I1018 17:48:22.255770   69488 provision.go:143] copyHostCerts
	I1018 17:48:22.255810   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem
	I1018 17:48:22.255843   69488 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem, removing ...
	I1018 17:48:22.255850   69488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem
	I1018 17:48:22.255928   69488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem (1078 bytes)
	I1018 17:48:22.255999   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem
	I1018 17:48:22.256017   69488 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem, removing ...
	I1018 17:48:22.256021   69488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem
	I1018 17:48:22.256045   69488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem (1123 bytes)
	I1018 17:48:22.256080   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem
	I1018 17:48:22.256096   69488 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem, removing ...
	I1018 17:48:22.256100   69488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem
	I1018 17:48:22.256121   69488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem (1675 bytes)
	I1018 17:48:22.256204   69488 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem org=jenkins.ha-181800-m02 san=[127.0.0.1 192.168.49.3 ha-181800-m02 localhost minikube]
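The provision step above issues a per-machine server certificate signed by the minikube CA, with the SANs listed in the log (127.0.0.1, 192.168.49.3, ha-181800-m02, localhost, minikube). A compact, hedged sketch of issuing such a SAN-bearing certificate with crypto/x509 follows; the throwaway CA is generated on the fly purely so the example is self-contained, whereas the real flow reuses the CA under .minikube/certs, and error handling is elided for brevity.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA so the example runs standalone.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "example-ca"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server certificate with the SANs from the log line above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "ha-181800-m02", Organization: []string{"jenkins.ha-181800-m02"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"ha-181800-m02", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.3")},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

        // Write the server cert in PEM form (server.pem in the log).
        out, _ := os.Create("server.pem")
        defer out.Close()
        pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }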
	I1018 17:48:22.398509   69488 provision.go:177] copyRemoteCerts
	I1018 17:48:22.398627   69488 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 17:48:22.398703   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m02
	I1018 17:48:22.417071   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m02/id_rsa Username:docker}
	I1018 17:48:22.539435   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1018 17:48:22.539497   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 17:48:22.590740   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1018 17:48:22.590799   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 17:48:22.640636   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1018 17:48:22.640749   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1018 17:48:22.682470   69488 provision.go:87] duration metric: took 464.715425ms to configureAuth
	I1018 17:48:22.682541   69488 ubuntu.go:206] setting minikube options for container-runtime
	I1018 17:48:22.682832   69488 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:48:22.682993   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m02
	I1018 17:48:22.710684   69488 main.go:141] libmachine: Using SSH client type: native
	I1018 17:48:22.710986   69488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I1018 17:48:22.711001   69488 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 17:49:53.355970   69488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 17:49:53.355994   69488 machine.go:96] duration metric: took 1m35.092758423s to provisionDockerMachine
	I1018 17:49:53.356005   69488 start.go:293] postStartSetup for "ha-181800-m02" (driver="docker")
	I1018 17:49:53.356016   69488 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 17:49:53.356073   69488 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 17:49:53.356118   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m02
	I1018 17:49:53.374240   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m02/id_rsa Username:docker}
	I1018 17:49:53.476619   69488 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 17:49:53.479822   69488 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 17:49:53.479849   69488 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 17:49:53.479860   69488 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/addons for local assets ...
	I1018 17:49:53.479932   69488 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/files for local assets ...
	I1018 17:49:53.480042   69488 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> 43202.pem in /etc/ssl/certs
	I1018 17:49:53.480053   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> /etc/ssl/certs/43202.pem
	I1018 17:49:53.480150   69488 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 17:49:53.487506   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /etc/ssl/certs/43202.pem (1708 bytes)
	I1018 17:49:53.503781   69488 start.go:296] duration metric: took 147.726679ms for postStartSetup
	I1018 17:49:53.503861   69488 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 17:49:53.503907   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m02
	I1018 17:49:53.521965   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m02/id_rsa Username:docker}
	I1018 17:49:53.622051   69488 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 17:49:53.627407   69488 fix.go:56] duration metric: took 1m35.793761422s for fixHost
	I1018 17:49:53.627431   69488 start.go:83] releasing machines lock for "ha-181800-m02", held for 1m35.793813517s
	I1018 17:49:53.627503   69488 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-181800-m02
	I1018 17:49:53.647527   69488 out.go:179] * Found network options:
	I1018 17:49:53.650482   69488 out.go:179]   - NO_PROXY=192.168.49.2
	W1018 17:49:53.653336   69488 proxy.go:120] fail to check proxy env: Error ip not in block
	W1018 17:49:53.653390   69488 proxy.go:120] fail to check proxy env: Error ip not in block
	I1018 17:49:53.653464   69488 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 17:49:53.653510   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m02
	I1018 17:49:53.653793   69488 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 17:49:53.653863   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m02
	I1018 17:49:53.671905   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m02/id_rsa Username:docker}
	I1018 17:49:53.683540   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m02/id_rsa Username:docker}
	I1018 17:49:53.861179   69488 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 17:49:53.865770   69488 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 17:49:53.865856   69488 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 17:49:53.873670   69488 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 17:49:53.873694   69488 start.go:495] detecting cgroup driver to use...
	I1018 17:49:53.873745   69488 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 17:49:53.873813   69488 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 17:49:53.888526   69488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 17:49:53.901761   69488 docker.go:218] disabling cri-docker service (if available) ...
	I1018 17:49:53.901850   69488 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 17:49:53.917699   69488 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 17:49:53.931789   69488 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 17:49:54.071500   69488 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 17:49:54.203057   69488 docker.go:234] disabling docker service ...
	I1018 17:49:54.203122   69488 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 17:49:54.218563   69488 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 17:49:54.232433   69488 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 17:49:54.361440   69488 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 17:49:54.490330   69488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 17:49:54.503221   69488 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 17:49:54.517805   69488 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 17:49:54.517883   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:49:54.527169   69488 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 17:49:54.527231   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:49:54.536041   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:49:54.544703   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:49:54.553243   69488 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 17:49:54.562614   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:49:54.571510   69488 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:49:54.579788   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:49:54.588456   69488 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 17:49:54.595820   69488 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 17:49:54.602817   69488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 17:49:54.728528   69488 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 17:49:58.621131   69488 ssh_runner.go:235] Completed: sudo systemctl restart crio: (3.89256859s)
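The sed invocations between the crictl.yaml write and the crio restart above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: they pin the pause image to registry.k8s.io/pause:3.10.1, force the cgroupfs cgroup manager, set conmon_cgroup to pod, and open unprivileged ports via default_sysctls. A hedged Go sketch of the underlying "replace or append a key in a config file" step is below; the keys and values come from the log, but the helper name and the scratch path in main are made up.

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    // setTOMLKey replaces (or appends) a `key = value` line in a small
    // config file, loosely mirroring the sed edits in the log above.
    func setTOMLKey(path, key, value string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            if !os.IsNotExist(err) {
                return err
            }
            data = nil
        }
        re := regexp.MustCompile(`(?m)^\s*` + regexp.QuoteMeta(key) + `\s*=.*$`)
        line := fmt.Sprintf("%s = %s", key, value)
        var out []byte
        if re.Match(data) {
            out = re.ReplaceAll(data, []byte(line))
        } else {
            out = append(data, []byte(line+"\n")...)
        }
        return os.WriteFile(path, out, 0644)
    }

    func main() {
        // Scratch file standing in for /etc/crio/crio.conf.d/02-crio.conf.
        path := "02-crio.conf"
        _ = setTOMLKey(path, "pause_image", `"registry.k8s.io/pause:3.10.1"`)
        _ = setTOMLKey(path, "cgroup_manager", `"cgroupfs"`)
    }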
	I1018 17:49:58.626115   69488 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 17:49:58.626223   69488 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 17:49:58.631167   69488 start.go:563] Will wait 60s for crictl version
	I1018 17:49:58.631232   69488 ssh_runner.go:195] Run: which crictl
	I1018 17:49:58.639191   69488 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 17:49:58.672795   69488 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 17:49:58.672878   69488 ssh_runner.go:195] Run: crio --version
	I1018 17:49:58.723386   69488 ssh_runner.go:195] Run: crio --version
	I1018 17:49:58.777499   69488 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 17:49:58.780571   69488 out.go:179]   - env NO_PROXY=192.168.49.2
	I1018 17:49:58.783632   69488 cli_runner.go:164] Run: docker network inspect ha-181800 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 17:49:58.815077   69488 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1018 17:49:58.819329   69488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 17:49:58.831215   69488 mustload.go:65] Loading cluster: ha-181800
	I1018 17:49:58.831449   69488 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:49:58.831716   69488 cli_runner.go:164] Run: docker container inspect ha-181800 --format={{.State.Status}}
	I1018 17:49:58.862708   69488 host.go:66] Checking if "ha-181800" exists ...
	I1018 17:49:58.863022   69488 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800 for IP: 192.168.49.3
	I1018 17:49:58.863040   69488 certs.go:195] generating shared ca certs ...
	I1018 17:49:58.863058   69488 certs.go:227] acquiring lock for ca certs: {Name:mk544ed642fa2832e9f6dd22fa45f3270b7c1ad4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:49:58.863172   69488 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key
	I1018 17:49:58.863215   69488 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key
	I1018 17:49:58.863222   69488 certs.go:257] generating profile certs ...
	I1018 17:49:58.863290   69488 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/client.key
	I1018 17:49:58.863337   69488 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.key.887e0b27
	I1018 17:49:58.863381   69488 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.key
	I1018 17:49:58.863390   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1018 17:49:58.863402   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1018 17:49:58.863414   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1018 17:49:58.863425   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1018 17:49:58.863435   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1018 17:49:58.863448   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1018 17:49:58.863470   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1018 17:49:58.863481   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1018 17:49:58.863531   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem (1338 bytes)
	W1018 17:49:58.863559   69488 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320_empty.pem, impossibly tiny 0 bytes
	I1018 17:49:58.863567   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 17:49:58.863589   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem (1078 bytes)
	I1018 17:49:58.863615   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem (1123 bytes)
	I1018 17:49:58.863635   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem (1675 bytes)
	I1018 17:49:58.863676   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem (1708 bytes)
	I1018 17:49:58.863709   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> /usr/share/ca-certificates/43202.pem
	I1018 17:49:58.863731   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:49:58.863743   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem -> /usr/share/ca-certificates/4320.pem
	I1018 17:49:58.863871   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:49:58.882935   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800/id_rsa Username:docker}
	I1018 17:49:58.981280   69488 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1018 17:49:58.984884   69488 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1018 17:49:58.992968   69488 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1018 17:49:58.996547   69488 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1018 17:49:59.005742   69488 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1018 17:49:59.009863   69488 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1018 17:49:59.018651   69488 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1018 17:49:59.022300   69488 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1018 17:49:59.030647   69488 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1018 17:49:59.034128   69488 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1018 17:49:59.042303   69488 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1018 17:49:59.045696   69488 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1018 17:49:59.054134   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 17:49:59.072336   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 17:49:59.090250   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 17:49:59.107793   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 17:49:59.124795   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1018 17:49:59.150615   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 17:49:59.169033   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 17:49:59.186177   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 17:49:59.203120   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /usr/share/ca-certificates/43202.pem (1708 bytes)
	I1018 17:49:59.220145   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 17:49:59.237999   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem --> /usr/share/ca-certificates/4320.pem (1338 bytes)
	I1018 17:49:59.257279   69488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1018 17:49:59.269634   69488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1018 17:49:59.282735   69488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1018 17:49:59.295341   69488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1018 17:49:59.308329   69488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1018 17:49:59.320556   69488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1018 17:49:59.332714   69488 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1018 17:49:59.348902   69488 ssh_runner.go:195] Run: openssl version
	I1018 17:49:59.356738   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 17:49:59.365172   69488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:49:59.368839   69488 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:49:59.368976   69488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:49:59.414784   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 17:49:59.422423   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4320.pem && ln -fs /usr/share/ca-certificates/4320.pem /etc/ssl/certs/4320.pem"
	I1018 17:49:59.430191   69488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4320.pem
	I1018 17:49:59.433619   69488 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 17:19 /usr/share/ca-certificates/4320.pem
	I1018 17:49:59.433727   69488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4320.pem
	I1018 17:49:59.474255   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4320.pem /etc/ssl/certs/51391683.0"
	I1018 17:49:59.481911   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43202.pem && ln -fs /usr/share/ca-certificates/43202.pem /etc/ssl/certs/43202.pem"
	I1018 17:49:59.490061   69488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43202.pem
	I1018 17:49:59.493763   69488 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 17:19 /usr/share/ca-certificates/43202.pem
	I1018 17:49:59.493835   69488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43202.pem
	I1018 17:49:59.534567   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43202.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 17:49:59.542475   69488 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 17:49:59.546230   69488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 17:49:59.592499   69488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 17:49:59.635764   69488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 17:49:59.676750   69488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 17:49:59.719668   69488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 17:49:59.760653   69488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1018 17:49:59.801453   69488 kubeadm.go:934] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1018 17:49:59.801594   69488 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-181800-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-181800 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 17:49:59.801625   69488 kube-vip.go:115] generating kube-vip config ...
	I1018 17:49:59.801676   69488 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1018 17:49:59.813138   69488 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1018 17:49:59.813221   69488 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
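The lsmod probe above found no ip_vs modules, so kube-vip falls back to ARP-based leader election for the 192.168.49.254 VIP instead of IPVS control-plane load-balancing, which is why the generated manifest carries vip_arp and vip_leaderelection settings only. A rough sketch of that decision as a standalone check (minikube only probes and falls back; the optional modprobe attempt below is an added assumption, not something minikube does):

    #!/usr/bin/env bash
    # Decide between IPVS load-balancing and ARP leader election for the control-plane VIP.
    if lsmod | grep -q '^ip_vs'; then
      echo "ip_vs available: IPVS-based load-balancing can be enabled"
    elif sudo modprobe ip_vs 2>/dev/null && lsmod | grep -q '^ip_vs'; then
      echo "ip_vs loaded on demand: IPVS-based load-balancing can be enabled"
    else
      echo "ip_vs unavailable: falling back to ARP leader election (vip_leaderelection=true)"
    fi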
	I1018 17:49:59.813313   69488 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 17:49:59.820930   69488 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 17:49:59.821061   69488 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1018 17:49:59.828485   69488 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1018 17:49:59.840643   69488 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 17:49:59.853675   69488 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1018 17:49:59.867836   69488 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1018 17:49:59.871456   69488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 17:49:59.881052   69488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 17:50:00.019627   69488 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 17:50:00.063785   69488 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 17:50:00.065404   69488 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:50:00.068131   69488 out.go:179] * Verifying Kubernetes components...
	I1018 17:50:00.071263   69488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 17:50:00.372789   69488 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 17:50:00.393030   69488 kapi.go:59] client config for ha-181800: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/client.key", CAFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1018 17:50:00.393170   69488 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1018 17:50:00.393487   69488 node_ready.go:35] waiting up to 6m0s for node "ha-181800-m02" to be "Ready" ...
	W1018 17:50:02.394400   69488 node_ready.go:55] error getting node "ha-181800-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-181800-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1018 17:50:08.470080   69488 node_ready.go:57] node "ha-181800-m02" has "Ready":"Unknown" status (will retry)
	I1018 17:50:09.421305   69488 node_ready.go:49] node "ha-181800-m02" is "Ready"
	I1018 17:50:09.421384   69488 node_ready.go:38] duration metric: took 9.02787205s for node "ha-181800-m02" to be "Ready" ...
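The wait above polls the node object until its Ready condition becomes True; along the way it sees a refused connection (the apiserver on .2 is still coming back) and an Unknown status before the node settles. The same wait expressed with kubectl against this cluster (node name and the 6-minute budget are taken from the log; using kubectl here is an illustration, minikube talks to the API through its own client):

    #!/usr/bin/env bash
    # Block until the rejoining control-plane node reports Ready.
    kubectl wait node/ha-181800-m02 --for=condition=Ready --timeout=6m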
	I1018 17:50:09.421422   69488 api_server.go:52] waiting for apiserver process to appear ...
	I1018 17:50:09.421500   69488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:50:09.447456   69488 api_server.go:72] duration metric: took 9.383624261s to wait for apiserver process to appear ...
	I1018 17:50:09.447520   69488 api_server.go:88] waiting for apiserver healthz status ...
	I1018 17:50:09.447553   69488 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 17:50:09.466347   69488 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 17:50:09.466422   69488 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 17:50:09.947999   69488 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 17:50:09.958418   69488 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 17:50:09.958509   69488 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 17:50:10.447814   69488 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 17:50:10.462608   69488 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1018 17:50:10.463984   69488 api_server.go:141] control plane version: v1.34.1
	I1018 17:50:10.464041   69488 api_server.go:131] duration metric: took 1.016500993s to wait for apiserver health ...
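The 500 responses above are expected while the rbac/bootstrap-roles post-start hook finishes; the check simply re-polls /healthz until it returns 200. A hedged curl equivalent of that wait (endpoint taken from the log; -k stands in for the CA and client-certificate handling minikube actually uses):

    #!/usr/bin/env bash
    # Poll the apiserver health endpoint until it returns 200 or a deadline passes.
    endpoint="https://192.168.49.2:8443/healthz"
    deadline=$((SECONDS + 300))
    until [ "$(curl -sk -o /dev/null -w '%{http_code}' "$endpoint")" = "200" ]; do
      if [ "$SECONDS" -ge "$deadline" ]; then
        echo "apiserver did not become healthy in time" >&2
        exit 1
      fi
      sleep 0.5
    done
    echo "apiserver healthy"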
	I1018 17:50:10.464067   69488 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 17:50:10.483197   69488 system_pods.go:59] 26 kube-system pods found
	I1018 17:50:10.483289   69488 system_pods.go:61] "coredns-66bc5c9577-f6v2w" [a1fbdf00-9636-43a5-b1ed-a98bcacb5537] Running
	I1018 17:50:10.483312   69488 system_pods.go:61] "coredns-66bc5c9577-p7nbg" [9d361193-5b45-400e-8161-804fc30e7515] Running
	I1018 17:50:10.483343   69488 system_pods.go:61] "etcd-ha-181800" [3aafeb42-d09a-4b84-9739-e25adc3a4135] Running
	I1018 17:50:10.483363   69488 system_pods.go:61] "etcd-ha-181800-m02" [194d8d52-b9b6-43ae-8c1f-01b965d3ae96] Running
	I1018 17:50:10.483380   69488 system_pods.go:61] "etcd-ha-181800-m03" [f52cd0ee-6f99-49ba-8c4f-218b8d166fe2] Running
	I1018 17:50:10.483399   69488 system_pods.go:61] "kindnet-72mvm" [5edfc356-9d49-4895-b36a-06c2bd39155a] Running
	I1018 17:50:10.483417   69488 system_pods.go:61] "kindnet-86s8z" [6559ac9e-c73d-4d49-a0e1-87d630e5bec8] Running
	I1018 17:50:10.483439   69488 system_pods.go:61] "kindnet-88bv7" [3b3b9715-1e6e-4046-adae-f372381e068a] Running
	I1018 17:50:10.483466   69488 system_pods.go:61] "kindnet-9qbbw" [d1a305ed-4a0e-4ccc-90e0-04577ad4e5c4] Running
	I1018 17:50:10.483486   69488 system_pods.go:61] "kube-apiserver-ha-181800" [4966738e-d055-404d-82ad-0d3f23ef0337] Running
	I1018 17:50:10.483506   69488 system_pods.go:61] "kube-apiserver-ha-181800-m02" [344fc499-0c04-4f86-a919-3c2da1e7a1e6] Running
	I1018 17:50:10.483524   69488 system_pods.go:61] "kube-apiserver-ha-181800-m03" [ce72f944-adc2-46a9-a83c-dc75936c3e9c] Running
	I1018 17:50:10.483543   69488 system_pods.go:61] "kube-controller-manager-ha-181800" [9a4be61b-4ecc-46da-86a1-472b6da720b9] Running
	I1018 17:50:10.483573   69488 system_pods.go:61] "kube-controller-manager-ha-181800-m02" [6a519ce2-92dc-4003-8f1a-6d818fea6da3] Running
	I1018 17:50:10.483593   69488 system_pods.go:61] "kube-controller-manager-ha-181800-m03" [9d247c9d-37a0-4880-8b0a-1134ebb963ab] Running
	I1018 17:50:10.483612   69488 system_pods.go:61] "kube-proxy-dpwpn" [dfabd129-fc36-4d16-ab0f-0b9ecc015712] Running
	I1018 17:50:10.483630   69488 system_pods.go:61] "kube-proxy-fj4ww" [40c5681f-ad11-4e21-a852-5601e2a9fa6e] Running
	I1018 17:50:10.483648   69488 system_pods.go:61] "kube-proxy-qsqmb" [9e100b31-50e5-4d86-a234-0d6277009e98] Running
	I1018 17:50:10.483673   69488 system_pods.go:61] "kube-proxy-stgvm" [15b89226-91ae-478f-acfe-7841776b1377] Running
	I1018 17:50:10.483697   69488 system_pods.go:61] "kube-scheduler-ha-181800" [f4699386-754c-4fa2-8556-174d872d6825] Running
	I1018 17:50:10.483716   69488 system_pods.go:61] "kube-scheduler-ha-181800-m02" [565d55c5-9541-4ef9-a036-3d9ff03f0fa9] Running
	I1018 17:50:10.483733   69488 system_pods.go:61] "kube-scheduler-ha-181800-m03" [4f8687e4-3dbc-4c98-97a4-ab703b016798] Running
	I1018 17:50:10.483751   69488 system_pods.go:61] "kube-vip-ha-181800" [a947f5a9-6257-4ff0-9f73-2d720974668b] Running
	I1018 17:50:10.483784   69488 system_pods.go:61] "kube-vip-ha-181800-m02" [21258022-efed-42fb-b206-89ffcd8d3820] Running
	I1018 17:50:10.483812   69488 system_pods.go:61] "kube-vip-ha-181800-m03" [0087f776-5d07-4c43-906d-c63afc2cc349] Running
	I1018 17:50:10.483830   69488 system_pods.go:61] "storage-provisioner" [3c6521cd-8e1b-46aa-96a3-39e475e1426c] Running
	I1018 17:50:10.483848   69488 system_pods.go:74] duration metric: took 19.763103ms to wait for pod list to return data ...
	I1018 17:50:10.483877   69488 default_sa.go:34] waiting for default service account to be created ...
	I1018 17:50:10.493513   69488 default_sa.go:45] found service account: "default"
	I1018 17:50:10.493594   69488 default_sa.go:55] duration metric: took 9.697323ms for default service account to be created ...
	I1018 17:50:10.493625   69488 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 17:50:10.501353   69488 system_pods.go:86] 26 kube-system pods found
	I1018 17:50:10.501452   69488 system_pods.go:89] "coredns-66bc5c9577-f6v2w" [a1fbdf00-9636-43a5-b1ed-a98bcacb5537] Running
	I1018 17:50:10.501476   69488 system_pods.go:89] "coredns-66bc5c9577-p7nbg" [9d361193-5b45-400e-8161-804fc30e7515] Running
	I1018 17:50:10.501494   69488 system_pods.go:89] "etcd-ha-181800" [3aafeb42-d09a-4b84-9739-e25adc3a4135] Running
	I1018 17:50:10.501514   69488 system_pods.go:89] "etcd-ha-181800-m02" [194d8d52-b9b6-43ae-8c1f-01b965d3ae96] Running
	I1018 17:50:10.501540   69488 system_pods.go:89] "etcd-ha-181800-m03" [f52cd0ee-6f99-49ba-8c4f-218b8d166fe2] Running
	I1018 17:50:10.501560   69488 system_pods.go:89] "kindnet-72mvm" [5edfc356-9d49-4895-b36a-06c2bd39155a] Running
	I1018 17:50:10.501578   69488 system_pods.go:89] "kindnet-86s8z" [6559ac9e-c73d-4d49-a0e1-87d630e5bec8] Running
	I1018 17:50:10.501595   69488 system_pods.go:89] "kindnet-88bv7" [3b3b9715-1e6e-4046-adae-f372381e068a] Running
	I1018 17:50:10.501612   69488 system_pods.go:89] "kindnet-9qbbw" [d1a305ed-4a0e-4ccc-90e0-04577ad4e5c4] Running
	I1018 17:50:10.501639   69488 system_pods.go:89] "kube-apiserver-ha-181800" [4966738e-d055-404d-82ad-0d3f23ef0337] Running
	I1018 17:50:10.501660   69488 system_pods.go:89] "kube-apiserver-ha-181800-m02" [344fc499-0c04-4f86-a919-3c2da1e7a1e6] Running
	I1018 17:50:10.501677   69488 system_pods.go:89] "kube-apiserver-ha-181800-m03" [ce72f944-adc2-46a9-a83c-dc75936c3e9c] Running
	I1018 17:50:10.501694   69488 system_pods.go:89] "kube-controller-manager-ha-181800" [9a4be61b-4ecc-46da-86a1-472b6da720b9] Running
	I1018 17:50:10.501711   69488 system_pods.go:89] "kube-controller-manager-ha-181800-m02" [6a519ce2-92dc-4003-8f1a-6d818fea6da3] Running
	I1018 17:50:10.501737   69488 system_pods.go:89] "kube-controller-manager-ha-181800-m03" [9d247c9d-37a0-4880-8b0a-1134ebb963ab] Running
	I1018 17:50:10.501756   69488 system_pods.go:89] "kube-proxy-dpwpn" [dfabd129-fc36-4d16-ab0f-0b9ecc015712] Running
	I1018 17:50:10.501776   69488 system_pods.go:89] "kube-proxy-fj4ww" [40c5681f-ad11-4e21-a852-5601e2a9fa6e] Running
	I1018 17:50:10.501793   69488 system_pods.go:89] "kube-proxy-qsqmb" [9e100b31-50e5-4d86-a234-0d6277009e98] Running
	I1018 17:50:10.501809   69488 system_pods.go:89] "kube-proxy-stgvm" [15b89226-91ae-478f-acfe-7841776b1377] Running
	I1018 17:50:10.501836   69488 system_pods.go:89] "kube-scheduler-ha-181800" [f4699386-754c-4fa2-8556-174d872d6825] Running
	I1018 17:50:10.501855   69488 system_pods.go:89] "kube-scheduler-ha-181800-m02" [565d55c5-9541-4ef9-a036-3d9ff03f0fa9] Running
	I1018 17:50:10.501872   69488 system_pods.go:89] "kube-scheduler-ha-181800-m03" [4f8687e4-3dbc-4c98-97a4-ab703b016798] Running
	I1018 17:50:10.501889   69488 system_pods.go:89] "kube-vip-ha-181800" [a947f5a9-6257-4ff0-9f73-2d720974668b] Running
	I1018 17:50:10.501906   69488 system_pods.go:89] "kube-vip-ha-181800-m02" [21258022-efed-42fb-b206-89ffcd8d3820] Running
	I1018 17:50:10.501923   69488 system_pods.go:89] "kube-vip-ha-181800-m03" [0087f776-5d07-4c43-906d-c63afc2cc349] Running
	I1018 17:50:10.501939   69488 system_pods.go:89] "storage-provisioner" [3c6521cd-8e1b-46aa-96a3-39e475e1426c] Running
	I1018 17:50:10.501958   69488 system_pods.go:126] duration metric: took 8.313403ms to wait for k8s-apps to be running ...
	I1018 17:50:10.501982   69488 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 17:50:10.502072   69488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 17:50:10.521995   69488 system_svc.go:56] duration metric: took 20.005468ms WaitForService to wait for kubelet
	I1018 17:50:10.522064   69488 kubeadm.go:586] duration metric: took 10.458238282s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 17:50:10.522097   69488 node_conditions.go:102] verifying NodePressure condition ...
	I1018 17:50:10.529801   69488 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 17:50:10.529839   69488 node_conditions.go:123] node cpu capacity is 2
	I1018 17:50:10.529851   69488 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 17:50:10.529856   69488 node_conditions.go:123] node cpu capacity is 2
	I1018 17:50:10.529860   69488 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 17:50:10.529864   69488 node_conditions.go:123] node cpu capacity is 2
	I1018 17:50:10.529868   69488 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 17:50:10.529873   69488 node_conditions.go:123] node cpu capacity is 2
	I1018 17:50:10.529878   69488 node_conditions.go:105] duration metric: took 7.761413ms to run NodePressure ...
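The NodePressure step above reads each node's CPU and ephemeral-storage capacity from the API (2 CPUs and 203034800Ki per node here). An approximate kubectl one-liner for inspecting the same fields (illustrative only; minikube reads the node objects through its own client, and the jsonpath expression is an assumption rather than what the test runs):

    #!/usr/bin/env bash
    # Print per-node CPU and ephemeral-storage capacity, as inspected during the NodePressure check.
    kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.capacity.cpu}{"\t"}{.status.capacity.ephemeral-storage}{"\n"}{end}'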
	I1018 17:50:10.529893   69488 start.go:241] waiting for startup goroutines ...
	I1018 17:50:10.529919   69488 start.go:255] writing updated cluster config ...
	I1018 17:50:10.533578   69488 out.go:203] 
	I1018 17:50:10.536806   69488 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:50:10.536948   69488 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/config.json ...
	I1018 17:50:10.540446   69488 out.go:179] * Starting "ha-181800-m03" control-plane node in "ha-181800" cluster
	I1018 17:50:10.544213   69488 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 17:50:10.547247   69488 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 17:50:10.550234   69488 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 17:50:10.550276   69488 cache.go:58] Caching tarball of preloaded images
	I1018 17:50:10.550383   69488 preload.go:233] Found /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 17:50:10.550399   69488 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 17:50:10.550572   69488 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/config.json ...
	I1018 17:50:10.550792   69488 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 17:50:10.581920   69488 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 17:50:10.581944   69488 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 17:50:10.581957   69488 cache.go:232] Successfully downloaded all kic artifacts
	I1018 17:50:10.581981   69488 start.go:360] acquireMachinesLock for ha-181800-m03: {Name:mk3bd15228a4ef4b7c016e23b190ad29deb5e3c7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 17:50:10.582039   69488 start.go:364] duration metric: took 38.023µs to acquireMachinesLock for "ha-181800-m03"
	I1018 17:50:10.582062   69488 start.go:96] Skipping create...Using existing machine configuration
	I1018 17:50:10.582068   69488 fix.go:54] fixHost starting: m03
	I1018 17:50:10.582331   69488 cli_runner.go:164] Run: docker container inspect ha-181800-m03 --format={{.State.Status}}
	I1018 17:50:10.604865   69488 fix.go:112] recreateIfNeeded on ha-181800-m03: state=Stopped err=<nil>
	W1018 17:50:10.604890   69488 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 17:50:10.607957   69488 out.go:252] * Restarting existing docker container for "ha-181800-m03" ...
	I1018 17:50:10.608050   69488 cli_runner.go:164] Run: docker start ha-181800-m03
	I1018 17:50:10.899418   69488 cli_runner.go:164] Run: docker container inspect ha-181800-m03 --format={{.State.Status}}
	I1018 17:50:10.926262   69488 kic.go:430] container "ha-181800-m03" state is running.
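fixHost above finds m03 stopped, issues docker start, and re-inspects the container state until it is running. A rough standalone version of that restart-and-wait (container name from the log; the explicit polling loop is illustrative, minikube's kic package handles this internally):

    #!/usr/bin/env bash
    # Restart the stopped node container and wait until Docker reports it running.
    name=ha-181800-m03
    docker start "$name" >/dev/null
    for _ in $(seq 1 30); do
      state=$(docker container inspect -f '{{.State.Status}}' "$name")
      [ "$state" = "running" ] && { echo "$name is running"; exit 0; }
      sleep 1
    done
    echo "$name did not reach the running state" >&2
    exit 1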
	I1018 17:50:10.926628   69488 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-181800-m03
	I1018 17:50:10.950821   69488 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/config.json ...
	I1018 17:50:10.951066   69488 machine.go:93] provisionDockerMachine start ...
	I1018 17:50:10.951120   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m03
	I1018 17:50:10.976987   69488 main.go:141] libmachine: Using SSH client type: native
	I1018 17:50:10.977281   69488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1018 17:50:10.977290   69488 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 17:50:10.978264   69488 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1018 17:50:14.380761   69488 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-181800-m03
	
	I1018 17:50:14.380788   69488 ubuntu.go:182] provisioning hostname "ha-181800-m03"
	I1018 17:50:14.380865   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m03
	I1018 17:50:14.409115   69488 main.go:141] libmachine: Using SSH client type: native
	I1018 17:50:14.409426   69488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1018 17:50:14.409441   69488 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-181800-m03 && echo "ha-181800-m03" | sudo tee /etc/hostname
	I1018 17:50:14.717264   69488 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-181800-m03
	
	I1018 17:50:14.717353   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m03
	I1018 17:50:14.739028   69488 main.go:141] libmachine: Using SSH client type: native
	I1018 17:50:14.739335   69488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1018 17:50:14.739352   69488 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-181800-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-181800-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-181800-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 17:50:14.965850   69488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 17:50:14.965903   69488 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-2509/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-2509/.minikube}
	I1018 17:50:14.965931   69488 ubuntu.go:190] setting up certificates
	I1018 17:50:14.965940   69488 provision.go:84] configureAuth start
	I1018 17:50:14.966014   69488 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-181800-m03
	I1018 17:50:15.001400   69488 provision.go:143] copyHostCerts
	I1018 17:50:15.001447   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem
	I1018 17:50:15.001479   69488 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem, removing ...
	I1018 17:50:15.001492   69488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem
	I1018 17:50:15.001591   69488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem (1078 bytes)
	I1018 17:50:15.001685   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem
	I1018 17:50:15.001709   69488 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem, removing ...
	I1018 17:50:15.001717   69488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem
	I1018 17:50:15.001745   69488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem (1123 bytes)
	I1018 17:50:15.001793   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem
	I1018 17:50:15.001814   69488 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem, removing ...
	I1018 17:50:15.001822   69488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem
	I1018 17:50:15.001846   69488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem (1675 bytes)
	I1018 17:50:15.001898   69488 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem org=jenkins.ha-181800-m03 san=[127.0.0.1 192.168.49.4 ha-181800-m03 localhost minikube]
	I1018 17:50:15.478787   69488 provision.go:177] copyRemoteCerts
	I1018 17:50:15.478855   69488 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 17:50:15.478897   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m03
	I1018 17:50:15.499352   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m03/id_rsa Username:docker}
	I1018 17:50:15.670546   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1018 17:50:15.670610   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 17:50:15.737652   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1018 17:50:15.737722   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1018 17:50:15.785672   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1018 17:50:15.785736   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 17:50:15.819920   69488 provision.go:87] duration metric: took 853.956632ms to configureAuth
	I1018 17:50:15.819958   69488 ubuntu.go:206] setting minikube options for container-runtime
	I1018 17:50:15.820214   69488 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:50:15.820332   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m03
	I1018 17:50:15.865677   69488 main.go:141] libmachine: Using SSH client type: native
	I1018 17:50:15.866025   69488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1018 17:50:15.866041   69488 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 17:50:16.412687   69488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 17:50:16.412751   69488 machine.go:96] duration metric: took 5.461676033s to provisionDockerMachine
	I1018 17:50:16.412774   69488 start.go:293] postStartSetup for "ha-181800-m03" (driver="docker")
	I1018 17:50:16.412799   69488 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 17:50:16.412889   69488 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 17:50:16.413002   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m03
	I1018 17:50:16.433582   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m03/id_rsa Username:docker}
	I1018 17:50:16.541794   69488 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 17:50:16.545653   69488 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 17:50:16.545679   69488 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 17:50:16.545690   69488 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/addons for local assets ...
	I1018 17:50:16.545754   69488 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/files for local assets ...
	I1018 17:50:16.545831   69488 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> 43202.pem in /etc/ssl/certs
	I1018 17:50:16.545837   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> /etc/ssl/certs/43202.pem
	I1018 17:50:16.545942   69488 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 17:50:16.558126   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /etc/ssl/certs/43202.pem (1708 bytes)
	I1018 17:50:16.579067   69488 start.go:296] duration metric: took 166.265226ms for postStartSetup
	I1018 17:50:16.579147   69488 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 17:50:16.579196   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m03
	I1018 17:50:16.607003   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m03/id_rsa Username:docker}
	I1018 17:50:16.710563   69488 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 17:50:16.715811   69488 fix.go:56] duration metric: took 6.133736189s for fixHost
	I1018 17:50:16.715839   69488 start.go:83] releasing machines lock for "ha-181800-m03", held for 6.133787135s
	I1018 17:50:16.715904   69488 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-181800-m03
	I1018 17:50:16.738713   69488 out.go:179] * Found network options:
	I1018 17:50:16.742042   69488 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1018 17:50:16.745211   69488 proxy.go:120] fail to check proxy env: Error ip not in block
	W1018 17:50:16.745257   69488 proxy.go:120] fail to check proxy env: Error ip not in block
	W1018 17:50:16.745281   69488 proxy.go:120] fail to check proxy env: Error ip not in block
	W1018 17:50:16.745291   69488 proxy.go:120] fail to check proxy env: Error ip not in block
	I1018 17:50:16.745360   69488 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 17:50:16.745415   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m03
	I1018 17:50:16.745719   69488 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 17:50:16.745787   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m03
	I1018 17:50:16.786710   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m03/id_rsa Username:docker}
	I1018 17:50:16.789091   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m03/id_rsa Username:docker}
	I1018 17:50:17.000059   69488 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 17:50:17.007334   69488 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 17:50:17.007407   69488 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 17:50:17.020749   69488 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 17:50:17.020771   69488 start.go:495] detecting cgroup driver to use...
	I1018 17:50:17.020801   69488 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 17:50:17.020860   69488 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 17:50:17.040018   69488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 17:50:17.058499   69488 docker.go:218] disabling cri-docker service (if available) ...
	I1018 17:50:17.058565   69488 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 17:50:17.088757   69488 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 17:50:17.114857   69488 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 17:50:17.279680   69488 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 17:50:17.689048   69488 docker.go:234] disabling docker service ...
	I1018 17:50:17.689168   69488 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 17:50:17.768854   69488 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 17:50:17.797881   69488 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 17:50:18.156314   69488 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 17:50:18.369568   69488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 17:50:18.394137   69488 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 17:50:18.428969   69488 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 17:50:18.429103   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:50:18.447576   69488 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 17:50:18.447692   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:50:18.482845   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:50:18.510376   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:50:18.531315   69488 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 17:50:18.548495   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:50:18.563525   69488 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:50:18.581424   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:50:18.594509   69488 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 17:50:18.609129   69488 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 17:50:18.621435   69488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 17:50:18.879315   69488 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 17:50:19.151219   69488 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 17:50:19.151291   69488 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 17:50:19.155163   69488 start.go:563] Will wait 60s for crictl version
	I1018 17:50:19.155231   69488 ssh_runner.go:195] Run: which crictl
	I1018 17:50:19.159144   69488 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 17:50:19.185150   69488 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 17:50:19.185237   69488 ssh_runner.go:195] Run: crio --version
	I1018 17:50:19.215107   69488 ssh_runner.go:195] Run: crio --version
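The sed sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf on m03: pause image pinned to registry.k8s.io/pause:3.10.1, cgroupfs set as the cgroup manager, conmon placed in the pod cgroup, unprivileged low ports allowed via default_sysctls, and IPv4 forwarding enabled before CRI-O is restarted. Condensed into one sketch (same file and values as in the log; removal of any pre-existing sysctl entry and other idempotence handling are simplified):

    #!/usr/bin/env bash
    # Re-point CRI-O at the desired pause image and cgroup driver, then restart it.
    set -euo pipefail
    conf=/etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$conf"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$conf"
    sudo sed -i '/conmon_cgroup = .*/d' "$conf"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$conf"
    # Allow unprivileged containers to bind low ports.
    sudo grep -q '^ *default_sysctls' "$conf" || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$conf"
    sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$conf"
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sudo systemctl daemon-reload
    sudo systemctl restart crio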
	I1018 17:50:19.252641   69488 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 17:50:19.255663   69488 out.go:179]   - env NO_PROXY=192.168.49.2
	I1018 17:50:19.258473   69488 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1018 17:50:19.261365   69488 cli_runner.go:164] Run: docker network inspect ha-181800 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 17:50:19.278013   69488 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1018 17:50:19.282046   69488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 17:50:19.291553   69488 mustload.go:65] Loading cluster: ha-181800
	I1018 17:50:19.291792   69488 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:50:19.292044   69488 cli_runner.go:164] Run: docker container inspect ha-181800 --format={{.State.Status}}
	I1018 17:50:19.308345   69488 host.go:66] Checking if "ha-181800" exists ...
	I1018 17:50:19.308613   69488 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800 for IP: 192.168.49.4
	I1018 17:50:19.308629   69488 certs.go:195] generating shared ca certs ...
	I1018 17:50:19.308644   69488 certs.go:227] acquiring lock for ca certs: {Name:mk544ed642fa2832e9f6dd22fa45f3270b7c1ad4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:50:19.308750   69488 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key
	I1018 17:50:19.308801   69488 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key
	I1018 17:50:19.308811   69488 certs.go:257] generating profile certs ...
	I1018 17:50:19.308888   69488 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/client.key
	I1018 17:50:19.308994   69488 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.key.35e78fdb
	I1018 17:50:19.309039   69488 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.key
	I1018 17:50:19.309051   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1018 17:50:19.309064   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1018 17:50:19.309079   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1018 17:50:19.309093   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1018 17:50:19.309106   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1018 17:50:19.309121   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1018 17:50:19.309132   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1018 17:50:19.309147   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1018 17:50:19.309202   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem (1338 bytes)
	W1018 17:50:19.309233   69488 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320_empty.pem, impossibly tiny 0 bytes
	I1018 17:50:19.309246   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 17:50:19.309272   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem (1078 bytes)
	I1018 17:50:19.309298   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem (1123 bytes)
	I1018 17:50:19.309353   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem (1675 bytes)
	I1018 17:50:19.309405   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem (1708 bytes)
	I1018 17:50:19.309436   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> /usr/share/ca-certificates/43202.pem
	I1018 17:50:19.309452   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:50:19.309465   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem -> /usr/share/ca-certificates/4320.pem
	I1018 17:50:19.309518   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:50:19.326970   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800/id_rsa Username:docker}
	I1018 17:50:19.425285   69488 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1018 17:50:19.430205   69488 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1018 17:50:19.438544   69488 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1018 17:50:19.442194   69488 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1018 17:50:19.450335   69488 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1018 17:50:19.454272   69488 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1018 17:50:19.462534   69488 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1018 17:50:19.466318   69488 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1018 17:50:19.475475   69488 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1018 17:50:19.479138   69488 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1018 17:50:19.487039   69488 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1018 17:50:19.492406   69488 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1018 17:50:19.511212   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 17:50:19.558261   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 17:50:19.590631   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 17:50:19.618816   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 17:50:19.644073   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1018 17:50:19.666879   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 17:50:19.688513   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 17:50:19.707989   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 17:50:19.736170   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /usr/share/ca-certificates/43202.pem (1708 bytes)
	I1018 17:50:19.759883   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 17:50:19.781940   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem --> /usr/share/ca-certificates/4320.pem (1338 bytes)
	I1018 17:50:19.806805   69488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1018 17:50:19.820301   69488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1018 17:50:19.837237   69488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1018 17:50:19.852161   69488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1018 17:50:19.865774   69488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1018 17:50:19.879759   69488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1018 17:50:19.893543   69488 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1018 17:50:19.907773   69488 ssh_runner.go:195] Run: openssl version
	I1018 17:50:19.914031   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4320.pem && ln -fs /usr/share/ca-certificates/4320.pem /etc/ssl/certs/4320.pem"
	I1018 17:50:19.923464   69488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4320.pem
	I1018 17:50:19.928100   69488 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 17:19 /usr/share/ca-certificates/4320.pem
	I1018 17:50:19.928198   69488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4320.pem
	I1018 17:50:19.970114   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4320.pem /etc/ssl/certs/51391683.0"
	I1018 17:50:19.978890   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43202.pem && ln -fs /usr/share/ca-certificates/43202.pem /etc/ssl/certs/43202.pem"
	I1018 17:50:19.987235   69488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43202.pem
	I1018 17:50:19.991041   69488 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 17:19 /usr/share/ca-certificates/43202.pem
	I1018 17:50:19.991160   69488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43202.pem
	I1018 17:50:20.033052   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43202.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 17:50:20.042399   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 17:50:20.051218   69488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:50:20.055291   69488 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:50:20.055383   69488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:50:20.097864   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 17:50:20.106870   69488 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 17:50:20.111573   69488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 17:50:20.153811   69488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 17:50:20.195276   69488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 17:50:20.242865   69488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 17:50:20.284917   69488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 17:50:20.327528   69488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1018 17:50:20.380629   69488 kubeadm.go:934] updating node {m03 192.168.49.4 8443 v1.34.1 crio true true} ...
	I1018 17:50:20.380764   69488 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-181800-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-181800 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 17:50:20.380810   69488 kube-vip.go:115] generating kube-vip config ...
	I1018 17:50:20.380884   69488 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1018 17:50:20.394557   69488 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1018 17:50:20.394614   69488 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
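Note the fallback above: because `lsmod | grep ip_vs` exited with status 1, IPVS-based control-plane load balancing is abandoned and the generated kube-vip static pod relies on ARP-based leader election instead (`vip_arp: "true"`, the plndr-cp-lock lease settings, VIP 192.168.49.254). A rough Go sketch of that decision, assuming that probing loaded modules is sufficient (function name illustrative):

package main

import (
	"fmt"
	"os/exec"
)

// vipMode returns "ipvs" only when the ip_vs kernel module shows up in lsmod;
// otherwise it falls back to ARP-based leader election, as the log above does.
func vipMode() string {
	out, err := exec.Command("sh", "-c", "lsmod | grep ip_vs").Output()
	if err != nil || len(out) == 0 {
		return "arp"
	}
	return "ipvs"
}

func main() {
	fmt.Println("kube-vip mode:", vipMode())
}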
	I1018 17:50:20.394671   69488 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 17:50:20.404177   69488 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 17:50:20.404302   69488 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1018 17:50:20.412251   69488 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1018 17:50:20.425311   69488 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 17:50:20.441214   69488 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1018 17:50:20.463677   69488 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1018 17:50:20.468015   69488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 17:50:20.478500   69488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 17:50:20.642164   69488 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 17:50:20.673908   69488 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 17:50:20.674213   69488 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:50:20.679253   69488 out.go:179] * Verifying Kubernetes components...
	I1018 17:50:20.682245   69488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 17:50:20.839086   69488 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 17:50:20.854027   69488 kapi.go:59] client config for ha-181800: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/client.key", CAFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1018 17:50:20.854101   69488 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1018 17:50:20.854335   69488 node_ready.go:35] waiting up to 6m0s for node "ha-181800-m03" to be "Ready" ...
	W1018 17:50:22.857724   69488 node_ready.go:57] node "ha-181800-m03" has "Ready":"Unknown" status (will retry)
	W1018 17:50:24.858447   69488 node_ready.go:57] node "ha-181800-m03" has "Ready":"Unknown" status (will retry)
	W1018 17:50:26.858609   69488 node_ready.go:57] node "ha-181800-m03" has "Ready":"Unknown" status (will retry)
	W1018 17:50:29.359403   69488 node_ready.go:57] node "ha-181800-m03" has "Ready":"Unknown" status (will retry)
	W1018 17:50:31.859188   69488 node_ready.go:57] node "ha-181800-m03" has "Ready":"Unknown" status (will retry)
	W1018 17:50:34.358228   69488 node_ready.go:57] node "ha-181800-m03" has "Ready":"Unknown" status (will retry)
	I1018 17:50:34.857876   69488 node_ready.go:49] node "ha-181800-m03" is "Ready"
	I1018 17:50:34.857902   69488 node_ready.go:38] duration metric: took 14.003549338s for node "ha-181800-m03" to be "Ready" ...
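The node_ready wait above polls the API every couple of seconds until "ha-181800-m03" reports Ready, giving up after 6 minutes. A hedged client-go sketch of that loop (interval, timeout and error handling are illustrative, not minikube's exact implementation):

package nodewait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady blocks until the named node has a Ready=True condition
// or the timeout expires, mirroring the node_ready.go wait in the log.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat API hiccups as "not ready yet" and retry
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

Building a clientset from the kubeconfig shown in the kapi.go dump above (client.crt/client.key against https://192.168.49.2:8443) and calling waitNodeReady(ctx, cs, "ha-181800-m03") would reproduce the same wait.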
	I1018 17:50:34.857914   69488 api_server.go:52] waiting for apiserver process to appear ...
	I1018 17:50:34.857973   69488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:50:34.869120   69488 api_server.go:72] duration metric: took 14.194796326s to wait for apiserver process to appear ...
	I1018 17:50:34.869149   69488 api_server.go:88] waiting for apiserver healthz status ...
	I1018 17:50:34.869170   69488 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 17:50:34.878933   69488 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1018 17:50:34.879871   69488 api_server.go:141] control plane version: v1.34.1
	I1018 17:50:34.879896   69488 api_server.go:131] duration metric: took 10.739864ms to wait for apiserver health ...
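After the node is Ready, the log probes /healthz and then reads the control-plane version. Roughly the same two calls through client-go's discovery client (sketch; assumes cs is the clientset built from the config above and the imports from the previous sketch plus "fmt"):

// checkAPIServer mirrors the healthz probe and version read in the log above.
func checkAPIServer(ctx context.Context, cs *kubernetes.Clientset) error {
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
	if err != nil {
		return err
	}
	if string(body) != "ok" {
		return fmt.Errorf("healthz returned %q", body)
	}
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		return err
	}
	fmt.Println("control plane version:", v.GitVersion) // e.g. v1.34.1
	return nil
}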
	I1018 17:50:34.879915   69488 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 17:50:34.886492   69488 system_pods.go:59] 26 kube-system pods found
	I1018 17:50:34.886536   69488 system_pods.go:61] "coredns-66bc5c9577-f6v2w" [a1fbdf00-9636-43a5-b1ed-a98bcacb5537] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 17:50:34.886578   69488 system_pods.go:61] "coredns-66bc5c9577-p7nbg" [9d361193-5b45-400e-8161-804fc30e7515] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 17:50:34.886593   69488 system_pods.go:61] "etcd-ha-181800" [3aafeb42-d09a-4b84-9739-e25adc3a4135] Running
	I1018 17:50:34.886598   69488 system_pods.go:61] "etcd-ha-181800-m02" [194d8d52-b9b6-43ae-8c1f-01b965d3ae96] Running
	I1018 17:50:34.886603   69488 system_pods.go:61] "etcd-ha-181800-m03" [f52cd0ee-6f99-49ba-8c4f-218b8d166fe2] Running
	I1018 17:50:34.886607   69488 system_pods.go:61] "kindnet-72mvm" [5edfc356-9d49-4895-b36a-06c2bd39155a] Running
	I1018 17:50:34.886622   69488 system_pods.go:61] "kindnet-86s8z" [6559ac9e-c73d-4d49-a0e1-87d630e5bec8] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1018 17:50:34.886629   69488 system_pods.go:61] "kindnet-88bv7" [3b3b9715-1e6e-4046-adae-f372381e068a] Running
	I1018 17:50:34.886642   69488 system_pods.go:61] "kindnet-9qbbw" [d1a305ed-4a0e-4ccc-90e0-04577ad4e5c4] Running
	I1018 17:50:34.886646   69488 system_pods.go:61] "kube-apiserver-ha-181800" [4966738e-d055-404d-82ad-0d3f23ef0337] Running
	I1018 17:50:34.886650   69488 system_pods.go:61] "kube-apiserver-ha-181800-m02" [344fc499-0c04-4f86-a919-3c2da1e7a1e6] Running
	I1018 17:50:34.886654   69488 system_pods.go:61] "kube-apiserver-ha-181800-m03" [ce72f944-adc2-46a9-a83c-dc75936c3e9c] Running
	I1018 17:50:34.886659   69488 system_pods.go:61] "kube-controller-manager-ha-181800" [9a4be61b-4ecc-46da-86a1-472b6da720b9] Running
	I1018 17:50:34.886672   69488 system_pods.go:61] "kube-controller-manager-ha-181800-m02" [6a519ce2-92dc-4003-8f1a-6d818fea6da3] Running
	I1018 17:50:34.886679   69488 system_pods.go:61] "kube-controller-manager-ha-181800-m03" [9d247c9d-37a0-4880-8b0a-1134ebb963ab] Running
	I1018 17:50:34.886685   69488 system_pods.go:61] "kube-proxy-dpwpn" [dfabd129-fc36-4d16-ab0f-0b9ecc015712] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1018 17:50:34.886699   69488 system_pods.go:61] "kube-proxy-fj4ww" [40c5681f-ad11-4e21-a852-5601e2a9fa6e] Running
	I1018 17:50:34.886703   69488 system_pods.go:61] "kube-proxy-qsqmb" [9e100b31-50e5-4d86-a234-0d6277009e98] Running
	I1018 17:50:34.886707   69488 system_pods.go:61] "kube-proxy-stgvm" [15b89226-91ae-478f-acfe-7841776b1377] Running
	I1018 17:50:34.886714   69488 system_pods.go:61] "kube-scheduler-ha-181800" [f4699386-754c-4fa2-8556-174d872d6825] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 17:50:34.886723   69488 system_pods.go:61] "kube-scheduler-ha-181800-m02" [565d55c5-9541-4ef9-a036-3d9ff03f0fa9] Running
	I1018 17:50:34.886727   69488 system_pods.go:61] "kube-scheduler-ha-181800-m03" [4f8687e4-3dbc-4c98-97a4-ab703b016798] Running
	I1018 17:50:34.886732   69488 system_pods.go:61] "kube-vip-ha-181800" [a947f5a9-6257-4ff0-9f73-2d720974668b] Running
	I1018 17:50:34.886739   69488 system_pods.go:61] "kube-vip-ha-181800-m02" [21258022-efed-42fb-b206-89ffcd8d3820] Running
	I1018 17:50:34.886743   69488 system_pods.go:61] "kube-vip-ha-181800-m03" [0087f776-5d07-4c43-906d-c63afc2cc349] Running
	I1018 17:50:34.886747   69488 system_pods.go:61] "storage-provisioner" [3c6521cd-8e1b-46aa-96a3-39e475e1426c] Running
	I1018 17:50:34.886753   69488 system_pods.go:74] duration metric: took 6.831276ms to wait for pod list to return data ...
	I1018 17:50:34.886767   69488 default_sa.go:34] waiting for default service account to be created ...
	I1018 17:50:34.890059   69488 default_sa.go:45] found service account: "default"
	I1018 17:50:34.890090   69488 default_sa.go:55] duration metric: took 3.316408ms for default service account to be created ...
	I1018 17:50:34.890099   69488 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 17:50:34.899064   69488 system_pods.go:86] 26 kube-system pods found
	I1018 17:50:34.899114   69488 system_pods.go:89] "coredns-66bc5c9577-f6v2w" [a1fbdf00-9636-43a5-b1ed-a98bcacb5537] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 17:50:34.899126   69488 system_pods.go:89] "coredns-66bc5c9577-p7nbg" [9d361193-5b45-400e-8161-804fc30e7515] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 17:50:34.899135   69488 system_pods.go:89] "etcd-ha-181800" [3aafeb42-d09a-4b84-9739-e25adc3a4135] Running
	I1018 17:50:34.899145   69488 system_pods.go:89] "etcd-ha-181800-m02" [194d8d52-b9b6-43ae-8c1f-01b965d3ae96] Running
	I1018 17:50:34.899154   69488 system_pods.go:89] "etcd-ha-181800-m03" [f52cd0ee-6f99-49ba-8c4f-218b8d166fe2] Running
	I1018 17:50:34.899159   69488 system_pods.go:89] "kindnet-72mvm" [5edfc356-9d49-4895-b36a-06c2bd39155a] Running
	I1018 17:50:34.899172   69488 system_pods.go:89] "kindnet-86s8z" [6559ac9e-c73d-4d49-a0e1-87d630e5bec8] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1018 17:50:34.899182   69488 system_pods.go:89] "kindnet-88bv7" [3b3b9715-1e6e-4046-adae-f372381e068a] Running
	I1018 17:50:34.899196   69488 system_pods.go:89] "kindnet-9qbbw" [d1a305ed-4a0e-4ccc-90e0-04577ad4e5c4] Running
	I1018 17:50:34.899202   69488 system_pods.go:89] "kube-apiserver-ha-181800" [4966738e-d055-404d-82ad-0d3f23ef0337] Running
	I1018 17:50:34.899213   69488 system_pods.go:89] "kube-apiserver-ha-181800-m02" [344fc499-0c04-4f86-a919-3c2da1e7a1e6] Running
	I1018 17:50:34.899223   69488 system_pods.go:89] "kube-apiserver-ha-181800-m03" [ce72f944-adc2-46a9-a83c-dc75936c3e9c] Running
	I1018 17:50:34.899228   69488 system_pods.go:89] "kube-controller-manager-ha-181800" [9a4be61b-4ecc-46da-86a1-472b6da720b9] Running
	I1018 17:50:34.899243   69488 system_pods.go:89] "kube-controller-manager-ha-181800-m02" [6a519ce2-92dc-4003-8f1a-6d818fea6da3] Running
	I1018 17:50:34.899249   69488 system_pods.go:89] "kube-controller-manager-ha-181800-m03" [9d247c9d-37a0-4880-8b0a-1134ebb963ab] Running
	I1018 17:50:34.899260   69488 system_pods.go:89] "kube-proxy-dpwpn" [dfabd129-fc36-4d16-ab0f-0b9ecc015712] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1018 17:50:34.899271   69488 system_pods.go:89] "kube-proxy-fj4ww" [40c5681f-ad11-4e21-a852-5601e2a9fa6e] Running
	I1018 17:50:34.899276   69488 system_pods.go:89] "kube-proxy-qsqmb" [9e100b31-50e5-4d86-a234-0d6277009e98] Running
	I1018 17:50:34.899281   69488 system_pods.go:89] "kube-proxy-stgvm" [15b89226-91ae-478f-acfe-7841776b1377] Running
	I1018 17:50:34.899294   69488 system_pods.go:89] "kube-scheduler-ha-181800" [f4699386-754c-4fa2-8556-174d872d6825] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 17:50:34.899303   69488 system_pods.go:89] "kube-scheduler-ha-181800-m02" [565d55c5-9541-4ef9-a036-3d9ff03f0fa9] Running
	I1018 17:50:34.899308   69488 system_pods.go:89] "kube-scheduler-ha-181800-m03" [4f8687e4-3dbc-4c98-97a4-ab703b016798] Running
	I1018 17:50:34.899312   69488 system_pods.go:89] "kube-vip-ha-181800" [a947f5a9-6257-4ff0-9f73-2d720974668b] Running
	I1018 17:50:34.899323   69488 system_pods.go:89] "kube-vip-ha-181800-m02" [21258022-efed-42fb-b206-89ffcd8d3820] Running
	I1018 17:50:34.899327   69488 system_pods.go:89] "kube-vip-ha-181800-m03" [0087f776-5d07-4c43-906d-c63afc2cc349] Running
	I1018 17:50:34.899331   69488 system_pods.go:89] "storage-provisioner" [3c6521cd-8e1b-46aa-96a3-39e475e1426c] Running
	I1018 17:50:34.899338   69488 system_pods.go:126] duration metric: took 9.233497ms to wait for k8s-apps to be running ...
	I1018 17:50:34.899350   69488 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 17:50:34.899417   69488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 17:50:34.917250   69488 system_svc.go:56] duration metric: took 17.889347ms WaitForService to wait for kubelet
	I1018 17:50:34.917280   69488 kubeadm.go:586] duration metric: took 14.242961018s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 17:50:34.917312   69488 node_conditions.go:102] verifying NodePressure condition ...
	I1018 17:50:34.921584   69488 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 17:50:34.921618   69488 node_conditions.go:123] node cpu capacity is 2
	I1018 17:50:34.921629   69488 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 17:50:34.921635   69488 node_conditions.go:123] node cpu capacity is 2
	I1018 17:50:34.921640   69488 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 17:50:34.921644   69488 node_conditions.go:123] node cpu capacity is 2
	I1018 17:50:34.921648   69488 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 17:50:34.921652   69488 node_conditions.go:123] node cpu capacity is 2
	I1018 17:50:34.921657   69488 node_conditions.go:105] duration metric: took 4.33997ms to run NodePressure ...
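The NodePressure verification above simply reads each node's reported capacity (ephemeral storage and CPU) from its status. A small sketch of that read, assuming the same clientset and imports as the earlier sketches (output format illustrative):

// printCapacity lists per-node ephemeral-storage and CPU capacity,
// the two figures the NodePressure check above reports.
func printCapacity(ctx context.Context, cs kubernetes.Interface) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n",
			n.Name,
			n.Status.Capacity.StorageEphemeral().String(),
			n.Status.Capacity.Cpu().String())
	}
	return nil
}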
	I1018 17:50:34.921672   69488 start.go:241] waiting for startup goroutines ...
	I1018 17:50:34.921695   69488 start.go:255] writing updated cluster config ...
	I1018 17:50:34.925146   69488 out.go:203] 
	I1018 17:50:34.928178   69488 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:50:34.928377   69488 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/config.json ...
	I1018 17:50:34.931719   69488 out.go:179] * Starting "ha-181800-m04" worker node in "ha-181800" cluster
	I1018 17:50:34.934625   69488 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 17:50:34.937723   69488 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 17:50:34.940621   69488 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 17:50:34.940656   69488 cache.go:58] Caching tarball of preloaded images
	I1018 17:50:34.940709   69488 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 17:50:34.940775   69488 preload.go:233] Found /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 17:50:34.940787   69488 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 17:50:34.940923   69488 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/config.json ...
	I1018 17:50:34.962521   69488 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 17:50:34.962544   69488 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 17:50:34.962563   69488 cache.go:232] Successfully downloaded all kic artifacts
	I1018 17:50:34.962587   69488 start.go:360] acquireMachinesLock for ha-181800-m04: {Name:mkde4f18de8924439f6b0cc4435fbaf784c3faa2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 17:50:34.962654   69488 start.go:364] duration metric: took 47.016µs to acquireMachinesLock for "ha-181800-m04"
	I1018 17:50:34.962676   69488 start.go:96] Skipping create...Using existing machine configuration
	I1018 17:50:34.962691   69488 fix.go:54] fixHost starting: m04
	I1018 17:50:34.962948   69488 cli_runner.go:164] Run: docker container inspect ha-181800-m04 --format={{.State.Status}}
	I1018 17:50:34.980810   69488 fix.go:112] recreateIfNeeded on ha-181800-m04: state=Stopped err=<nil>
	W1018 17:50:34.980838   69488 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 17:50:34.984164   69488 out.go:252] * Restarting existing docker container for "ha-181800-m04" ...
	I1018 17:50:34.984251   69488 cli_runner.go:164] Run: docker start ha-181800-m04
	I1018 17:50:35.315737   69488 cli_runner.go:164] Run: docker container inspect ha-181800-m04 --format={{.State.Status}}
	I1018 17:50:35.337160   69488 kic.go:430] container "ha-181800-m04" state is running.
	I1018 17:50:35.337590   69488 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-181800-m04
	I1018 17:50:35.363433   69488 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/config.json ...
	I1018 17:50:35.363682   69488 machine.go:93] provisionDockerMachine start ...
	I1018 17:50:35.363737   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m04
	I1018 17:50:35.394986   69488 main.go:141] libmachine: Using SSH client type: native
	I1018 17:50:35.395304   69488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1018 17:50:35.395315   69488 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 17:50:35.396115   69488 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1018 17:50:38.582281   69488 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-181800-m04
	
	I1018 17:50:38.582366   69488 ubuntu.go:182] provisioning hostname "ha-181800-m04"
	I1018 17:50:38.582470   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m04
	I1018 17:50:38.612842   69488 main.go:141] libmachine: Using SSH client type: native
	I1018 17:50:38.613162   69488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1018 17:50:38.613175   69488 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-181800-m04 && echo "ha-181800-m04" | sudo tee /etc/hostname
	I1018 17:50:38.824220   69488 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-181800-m04
	
	I1018 17:50:38.824341   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m04
	I1018 17:50:38.867678   69488 main.go:141] libmachine: Using SSH client type: native
	I1018 17:50:38.867969   69488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1018 17:50:38.867985   69488 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-181800-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-181800-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-181800-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 17:50:39.054604   69488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 17:50:39.054689   69488 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-2509/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-2509/.minikube}
	I1018 17:50:39.054718   69488 ubuntu.go:190] setting up certificates
	I1018 17:50:39.054753   69488 provision.go:84] configureAuth start
	I1018 17:50:39.054834   69488 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-181800-m04
	I1018 17:50:39.086058   69488 provision.go:143] copyHostCerts
	I1018 17:50:39.086092   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem
	I1018 17:50:39.086123   69488 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem, removing ...
	I1018 17:50:39.086130   69488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem
	I1018 17:50:39.086205   69488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem (1078 bytes)
	I1018 17:50:39.086277   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem
	I1018 17:50:39.086294   69488 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem, removing ...
	I1018 17:50:39.086298   69488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem
	I1018 17:50:39.086323   69488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem (1123 bytes)
	I1018 17:50:39.086360   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem
	I1018 17:50:39.086376   69488 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem, removing ...
	I1018 17:50:39.086380   69488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem
	I1018 17:50:39.086403   69488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem (1675 bytes)
	I1018 17:50:39.086448   69488 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem org=jenkins.ha-181800-m04 san=[127.0.0.1 192.168.49.5 ha-181800-m04 localhost minikube]
	I1018 17:50:39.468879   69488 provision.go:177] copyRemoteCerts
	I1018 17:50:39.469042   69488 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 17:50:39.469105   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m04
	I1018 17:50:39.488386   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m04/id_rsa Username:docker}
	I1018 17:50:39.624142   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1018 17:50:39.624201   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 17:50:39.661469   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1018 17:50:39.661533   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1018 17:50:39.687551   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1018 17:50:39.687610   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 17:50:39.714808   69488 provision.go:87] duration metric: took 660.019137ms to configureAuth
	I1018 17:50:39.714833   69488 ubuntu.go:206] setting minikube options for container-runtime
	I1018 17:50:39.715059   69488 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:50:39.715179   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m04
	I1018 17:50:39.744352   69488 main.go:141] libmachine: Using SSH client type: native
	I1018 17:50:39.744665   69488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1018 17:50:39.744680   69488 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 17:50:40.169343   69488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 17:50:40.169451   69488 machine.go:96] duration metric: took 4.805759657s to provisionDockerMachine
	I1018 17:50:40.169476   69488 start.go:293] postStartSetup for "ha-181800-m04" (driver="docker")
	I1018 17:50:40.169509   69488 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 17:50:40.169593   69488 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 17:50:40.169660   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m04
	I1018 17:50:40.199327   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m04/id_rsa Username:docker}
	I1018 17:50:40.309268   69488 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 17:50:40.313860   69488 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 17:50:40.313893   69488 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 17:50:40.313903   69488 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/addons for local assets ...
	I1018 17:50:40.313963   69488 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/files for local assets ...
	I1018 17:50:40.314046   69488 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> 43202.pem in /etc/ssl/certs
	I1018 17:50:40.314057   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> /etc/ssl/certs/43202.pem
	I1018 17:50:40.314164   69488 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 17:50:40.322086   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /etc/ssl/certs/43202.pem (1708 bytes)
	I1018 17:50:40.345649   69488 start.go:296] duration metric: took 176.137258ms for postStartSetup
	I1018 17:50:40.345726   69488 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 17:50:40.345765   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m04
	I1018 17:50:40.367346   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m04/id_rsa Username:docker}
	I1018 17:50:40.476066   69488 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 17:50:40.481571   69488 fix.go:56] duration metric: took 5.518874256s for fixHost
	I1018 17:50:40.481594   69488 start.go:83] releasing machines lock for "ha-181800-m04", held for 5.518929354s
	I1018 17:50:40.481667   69488 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-181800-m04
	I1018 17:50:40.518678   69488 out.go:179] * Found network options:
	I1018 17:50:40.522829   69488 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	W1018 17:50:40.526545   69488 proxy.go:120] fail to check proxy env: Error ip not in block
	W1018 17:50:40.526576   69488 proxy.go:120] fail to check proxy env: Error ip not in block
	W1018 17:50:40.526587   69488 proxy.go:120] fail to check proxy env: Error ip not in block
	W1018 17:50:40.526609   69488 proxy.go:120] fail to check proxy env: Error ip not in block
	W1018 17:50:40.526619   69488 proxy.go:120] fail to check proxy env: Error ip not in block
	W1018 17:50:40.526628   69488 proxy.go:120] fail to check proxy env: Error ip not in block
	I1018 17:50:40.526702   69488 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 17:50:40.526739   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m04
	I1018 17:50:40.526991   69488 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 17:50:40.527047   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m04
	I1018 17:50:40.564877   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m04/id_rsa Username:docker}
	I1018 17:50:40.572778   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m04/id_rsa Username:docker}
	I1018 17:50:40.812088   69488 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 17:50:40.818560   69488 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 17:50:40.818643   69488 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 17:50:40.827770   69488 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 17:50:40.827794   69488 start.go:495] detecting cgroup driver to use...
	I1018 17:50:40.827830   69488 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 17:50:40.827881   69488 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 17:50:40.844762   69488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 17:50:40.859855   69488 docker.go:218] disabling cri-docker service (if available) ...
	I1018 17:50:40.859920   69488 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 17:50:40.877123   69488 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 17:50:40.901442   69488 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 17:50:41.039508   69488 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 17:50:41.185848   69488 docker.go:234] disabling docker service ...
	I1018 17:50:41.185936   69488 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 17:50:41.204077   69488 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 17:50:41.219382   69488 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 17:50:41.421847   69488 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 17:50:41.682651   69488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 17:50:41.704546   69488 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 17:50:41.722306   69488 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 17:50:41.722376   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:50:41.737444   69488 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 17:50:41.737564   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:50:41.753240   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:50:41.765254   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:50:41.778891   69488 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 17:50:41.788840   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:50:41.799676   69488 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:50:41.810022   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:50:41.820591   69488 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 17:50:41.828788   69488 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 17:50:41.838483   69488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 17:50:41.972124   69488 ssh_runner.go:195] Run: sudo systemctl restart crio
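The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf before crio is restarted. Pieced together from those commands, the drop-in ends up roughly like the following (section headers assumed to follow the stock CRI-O layout; keys the log does not touch are omitted):

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]

[crio.image]
pause_image = "registry.k8s.io/pause:3.10.1"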
	I1018 17:50:42.178891   69488 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 17:50:42.178980   69488 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 17:50:42.184242   69488 start.go:563] Will wait 60s for crictl version
	I1018 17:50:42.184331   69488 ssh_runner.go:195] Run: which crictl
	I1018 17:50:42.191980   69488 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 17:50:42.224462   69488 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 17:50:42.224630   69488 ssh_runner.go:195] Run: crio --version
	I1018 17:50:42.261636   69488 ssh_runner.go:195] Run: crio --version
	I1018 17:50:42.307376   69488 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 17:50:42.310676   69488 out.go:179]   - env NO_PROXY=192.168.49.2
	I1018 17:50:42.313598   69488 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1018 17:50:42.316600   69488 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	I1018 17:50:42.319690   69488 cli_runner.go:164] Run: docker network inspect ha-181800 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 17:50:42.337639   69488 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1018 17:50:42.341794   69488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 17:50:42.354387   69488 mustload.go:65] Loading cluster: ha-181800
	I1018 17:50:42.354632   69488 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:50:42.354880   69488 cli_runner.go:164] Run: docker container inspect ha-181800 --format={{.State.Status}}
	I1018 17:50:42.375574   69488 host.go:66] Checking if "ha-181800" exists ...
	I1018 17:50:42.375851   69488 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800 for IP: 192.168.49.5
	I1018 17:50:42.375865   69488 certs.go:195] generating shared ca certs ...
	I1018 17:50:42.375878   69488 certs.go:227] acquiring lock for ca certs: {Name:mk544ed642fa2832e9f6dd22fa45f3270b7c1ad4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:50:42.375994   69488 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key
	I1018 17:50:42.376039   69488 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key
	I1018 17:50:42.376053   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1018 17:50:42.376065   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1018 17:50:42.376082   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1018 17:50:42.376099   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1018 17:50:42.376158   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem (1338 bytes)
	W1018 17:50:42.376191   69488 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320_empty.pem, impossibly tiny 0 bytes
	I1018 17:50:42.376202   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 17:50:42.376227   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem (1078 bytes)
	I1018 17:50:42.376253   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem (1123 bytes)
	I1018 17:50:42.376280   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem (1675 bytes)
	I1018 17:50:42.376328   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem (1708 bytes)
	I1018 17:50:42.376359   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:50:42.376376   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem -> /usr/share/ca-certificates/4320.pem
	I1018 17:50:42.376390   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> /usr/share/ca-certificates/43202.pem
	I1018 17:50:42.376442   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 17:50:42.395447   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 17:50:42.416556   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 17:50:42.438126   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 17:50:42.461131   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 17:50:42.491460   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem --> /usr/share/ca-certificates/4320.pem (1338 bytes)
	I1018 17:50:42.516977   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /usr/share/ca-certificates/43202.pem (1708 bytes)
	I1018 17:50:42.546320   69488 ssh_runner.go:195] Run: openssl version
	I1018 17:50:42.554579   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 17:50:42.566626   69488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:50:42.570900   69488 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:50:42.570969   69488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:50:42.623862   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 17:50:42.634866   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4320.pem && ln -fs /usr/share/ca-certificates/4320.pem /etc/ssl/certs/4320.pem"
	I1018 17:50:42.645108   69488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4320.pem
	I1018 17:50:42.655323   69488 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 17:19 /usr/share/ca-certificates/4320.pem
	I1018 17:50:42.655394   69488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4320.pem
	I1018 17:50:42.704646   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4320.pem /etc/ssl/certs/51391683.0"
	I1018 17:50:42.713644   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43202.pem && ln -fs /usr/share/ca-certificates/43202.pem /etc/ssl/certs/43202.pem"
	I1018 17:50:42.722573   69488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43202.pem
	I1018 17:50:42.726769   69488 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 17:19 /usr/share/ca-certificates/43202.pem
	I1018 17:50:42.726843   69488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43202.pem
	I1018 17:50:42.784245   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43202.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 17:50:42.792405   69488 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 17:50:42.803513   69488 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 17:50:42.803579   69488 kubeadm.go:934] updating node {m04 192.168.49.5 0 v1.34.1 crio false true} ...
	I1018 17:50:42.803680   69488 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-181800-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-181800 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 17:50:42.803759   69488 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 17:50:42.812894   69488 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 17:50:42.813002   69488 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1018 17:50:42.821266   69488 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1018 17:50:42.839760   69488 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 17:50:42.859184   69488 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1018 17:50:42.864035   69488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 17:50:42.875123   69488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 17:50:43.006572   69488 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 17:50:43.022917   69488 start.go:235] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1018 17:50:43.023313   69488 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:50:43.026393   69488 out.go:179] * Verifying Kubernetes components...
	I1018 17:50:43.029360   69488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 17:50:43.176018   69488 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 17:50:43.195799   69488 kapi.go:59] client config for ha-181800: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/client.key", CAFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1018 17:50:43.195926   69488 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1018 17:50:43.196200   69488 node_ready.go:35] waiting up to 6m0s for node "ha-181800-m04" to be "Ready" ...
	W1018 17:50:45.201538   69488 node_ready.go:57] node "ha-181800-m04" has "Ready":"Unknown" status (will retry)
	W1018 17:50:47.702556   69488 node_ready.go:57] node "ha-181800-m04" has "Ready":"Unknown" status (will retry)
	W1018 17:50:50.201440   69488 node_ready.go:57] node "ha-181800-m04" has "Ready":"Unknown" status (will retry)
	I1018 17:50:50.700371   69488 node_ready.go:49] node "ha-181800-m04" is "Ready"
	I1018 17:50:50.700396   69488 node_ready.go:38] duration metric: took 7.50415906s for node "ha-181800-m04" to be "Ready" ...
	I1018 17:50:50.700408   69488 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 17:50:50.700467   69488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 17:50:50.718400   69488 system_svc.go:56] duration metric: took 17.984135ms WaitForService to wait for kubelet
	I1018 17:50:50.718432   69488 kubeadm.go:586] duration metric: took 7.695467215s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 17:50:50.718449   69488 node_conditions.go:102] verifying NodePressure condition ...
	I1018 17:50:50.722731   69488 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 17:50:50.722761   69488 node_conditions.go:123] node cpu capacity is 2
	I1018 17:50:50.722774   69488 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 17:50:50.722779   69488 node_conditions.go:123] node cpu capacity is 2
	I1018 17:50:50.722783   69488 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 17:50:50.722787   69488 node_conditions.go:123] node cpu capacity is 2
	I1018 17:50:50.722791   69488 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 17:50:50.722795   69488 node_conditions.go:123] node cpu capacity is 2
	I1018 17:50:50.722799   69488 node_conditions.go:105] duration metric: took 4.345599ms to run NodePressure ...
	I1018 17:50:50.722811   69488 start.go:241] waiting for startup goroutines ...
	I1018 17:50:50.722837   69488 start.go:255] writing updated cluster config ...
	I1018 17:50:50.723159   69488 ssh_runner.go:195] Run: rm -f paused
	I1018 17:50:50.727229   69488 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 17:50:50.727747   69488 kapi.go:59] client config for ha-181800: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/client.key", CAFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1018 17:50:50.750070   69488 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-f6v2w" in "kube-system" namespace to be "Ready" or be gone ...
	W1018 17:50:52.756554   69488 pod_ready.go:104] pod "coredns-66bc5c9577-f6v2w" is not "Ready", error: <nil>
	W1018 17:50:54.757224   69488 pod_ready.go:104] pod "coredns-66bc5c9577-f6v2w" is not "Ready", error: <nil>
	I1018 17:50:55.872324   69488 pod_ready.go:94] pod "coredns-66bc5c9577-f6v2w" is "Ready"
	I1018 17:50:55.872348   69488 pod_ready.go:86] duration metric: took 5.122247372s for pod "coredns-66bc5c9577-f6v2w" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:55.872359   69488 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-p7nbg" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:55.891895   69488 pod_ready.go:94] pod "coredns-66bc5c9577-p7nbg" is "Ready"
	I1018 17:50:55.891959   69488 pod_ready.go:86] duration metric: took 19.593189ms for pod "coredns-66bc5c9577-p7nbg" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:55.900138   69488 pod_ready.go:83] waiting for pod "etcd-ha-181800" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:55.913638   69488 pod_ready.go:94] pod "etcd-ha-181800" is "Ready"
	I1018 17:50:55.913660   69488 pod_ready.go:86] duration metric: took 13.499842ms for pod "etcd-ha-181800" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:55.913670   69488 pod_ready.go:83] waiting for pod "etcd-ha-181800-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:55.920519   69488 pod_ready.go:94] pod "etcd-ha-181800-m02" is "Ready"
	I1018 17:50:55.920596   69488 pod_ready.go:86] duration metric: took 6.91899ms for pod "etcd-ha-181800-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:55.920619   69488 pod_ready.go:83] waiting for pod "etcd-ha-181800-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:55.954930   69488 pod_ready.go:94] pod "etcd-ha-181800-m03" is "Ready"
	I1018 17:50:55.955010   69488 pod_ready.go:86] duration metric: took 34.368453ms for pod "etcd-ha-181800-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:56.150428   69488 request.go:683] "Waited before sending request" delay="195.256268ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I1018 17:50:56.154502   69488 pod_ready.go:83] waiting for pod "kube-apiserver-ha-181800" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:56.350745   69488 request.go:683] "Waited before sending request" delay="196.132391ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-181800"
	I1018 17:50:56.551187   69488 request.go:683] "Waited before sending request" delay="197.298856ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-181800"
	I1018 17:50:56.554146   69488 pod_ready.go:94] pod "kube-apiserver-ha-181800" is "Ready"
	I1018 17:50:56.554177   69488 pod_ready.go:86] duration metric: took 399.650322ms for pod "kube-apiserver-ha-181800" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:56.554188   69488 pod_ready.go:83] waiting for pod "kube-apiserver-ha-181800-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:56.750528   69488 request.go:683] "Waited before sending request" delay="196.269246ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-181800-m02"
	I1018 17:50:56.951191   69488 request.go:683] "Waited before sending request" delay="191.312029ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-181800-m02"
	I1018 17:50:56.954528   69488 pod_ready.go:94] pod "kube-apiserver-ha-181800-m02" is "Ready"
	I1018 17:50:56.954555   69488 pod_ready.go:86] duration metric: took 400.360633ms for pod "kube-apiserver-ha-181800-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:56.954567   69488 pod_ready.go:83] waiting for pod "kube-apiserver-ha-181800-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:57.150777   69488 request.go:683] "Waited before sending request" delay="196.132408ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-181800-m03"
	I1018 17:50:57.350632   69488 request.go:683] "Waited before sending request" delay="196.3256ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-181800-m03"
	I1018 17:50:57.354249   69488 pod_ready.go:94] pod "kube-apiserver-ha-181800-m03" is "Ready"
	I1018 17:50:57.354277   69488 pod_ready.go:86] duration metric: took 399.70318ms for pod "kube-apiserver-ha-181800-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:57.550692   69488 request.go:683] "Waited before sending request" delay="196.326346ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I1018 17:50:57.554682   69488 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-181800" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:57.750932   69488 request.go:683] "Waited before sending request" delay="196.156235ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-181800"
	I1018 17:50:57.951083   69488 request.go:683] "Waited before sending request" delay="179.305539ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-181800"
	I1018 17:50:57.954373   69488 pod_ready.go:94] pod "kube-controller-manager-ha-181800" is "Ready"
	I1018 17:50:57.954402   69488 pod_ready.go:86] duration metric: took 399.688608ms for pod "kube-controller-manager-ha-181800" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:57.954412   69488 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-181800-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:58.150687   69488 request.go:683] "Waited before sending request" delay="196.203982ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-181800-m02"
	I1018 17:50:58.351259   69488 request.go:683] "Waited before sending request" delay="197.229423ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-181800-m02"
	I1018 17:50:58.354427   69488 pod_ready.go:94] pod "kube-controller-manager-ha-181800-m02" is "Ready"
	I1018 17:50:58.354451   69488 pod_ready.go:86] duration metric: took 400.032752ms for pod "kube-controller-manager-ha-181800-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:58.354461   69488 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-181800-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:58.550867   69488 request.go:683] "Waited before sending request" delay="196.323713ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-181800-m03"
	I1018 17:50:58.751164   69488 request.go:683] "Waited before sending request" delay="196.337531ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-181800-m03"
	I1018 17:50:58.754290   69488 pod_ready.go:94] pod "kube-controller-manager-ha-181800-m03" is "Ready"
	I1018 17:50:58.754318   69488 pod_ready.go:86] duration metric: took 399.850398ms for pod "kube-controller-manager-ha-181800-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:58.950697   69488 request.go:683] "Waited before sending request" delay="196.290137ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I1018 17:50:58.954553   69488 pod_ready.go:83] waiting for pod "kube-proxy-dpwpn" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:59.150998   69488 request.go:683] "Waited before sending request" delay="196.346368ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dpwpn"
	I1018 17:50:59.350617   69488 request.go:683] "Waited before sending request" delay="195.289755ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-181800-m02"
	I1018 17:50:59.353848   69488 pod_ready.go:94] pod "kube-proxy-dpwpn" is "Ready"
	I1018 17:50:59.353878   69488 pod_ready.go:86] duration metric: took 399.293025ms for pod "kube-proxy-dpwpn" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:59.353888   69488 pod_ready.go:83] waiting for pod "kube-proxy-fj4ww" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:59.550367   69488 request.go:683] "Waited before sending request" delay="196.374503ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fj4ww"
	I1018 17:50:59.751156   69488 request.go:683] "Waited before sending request" delay="197.148429ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-181800-m04"
	I1018 17:50:59.754407   69488 pod_ready.go:94] pod "kube-proxy-fj4ww" is "Ready"
	I1018 17:50:59.754437   69488 pod_ready.go:86] duration metric: took 400.541386ms for pod "kube-proxy-fj4ww" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:59.754446   69488 pod_ready.go:83] waiting for pod "kube-proxy-qsqmb" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:59.950755   69488 request.go:683] "Waited before sending request" delay="196.237656ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qsqmb"
	I1018 17:51:00.158458   69488 request.go:683] "Waited before sending request" delay="204.154018ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-181800-m03"
	I1018 17:51:00.170490   69488 pod_ready.go:94] pod "kube-proxy-qsqmb" is "Ready"
	I1018 17:51:00.170526   69488 pod_ready.go:86] duration metric: took 416.072575ms for pod "kube-proxy-qsqmb" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:51:00.170537   69488 pod_ready.go:83] waiting for pod "kube-proxy-stgvm" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:51:00.350837   69488 request.go:683] "Waited before sending request" delay="180.202158ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-stgvm"
	I1018 17:51:00.550600   69488 request.go:683] "Waited before sending request" delay="195.396062ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-181800"
	I1018 17:51:00.553989   69488 pod_ready.go:94] pod "kube-proxy-stgvm" is "Ready"
	I1018 17:51:00.554026   69488 pod_ready.go:86] duration metric: took 383.481925ms for pod "kube-proxy-stgvm" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:51:00.750322   69488 request.go:683] "Waited before sending request" delay="196.164105ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-scheduler"
	I1018 17:51:00.754581   69488 pod_ready.go:83] waiting for pod "kube-scheduler-ha-181800" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:51:00.951090   69488 request.go:683] "Waited before sending request" delay="196.343135ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-181800"
	I1018 17:51:01.151207   69488 request.go:683] "Waited before sending request" delay="196.368472ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-181800"
	I1018 17:51:01.154780   69488 pod_ready.go:94] pod "kube-scheduler-ha-181800" is "Ready"
	I1018 17:51:01.154809   69488 pod_ready.go:86] duration metric: took 400.156865ms for pod "kube-scheduler-ha-181800" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:51:01.154820   69488 pod_ready.go:83] waiting for pod "kube-scheduler-ha-181800-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:51:01.351014   69488 request.go:683] "Waited before sending request" delay="196.125229ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-181800-m02"
	I1018 17:51:01.550334   69488 request.go:683] "Waited before sending request" delay="195.254374ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-181800-m02"
	I1018 17:51:01.553462   69488 pod_ready.go:94] pod "kube-scheduler-ha-181800-m02" is "Ready"
	I1018 17:51:01.553533   69488 pod_ready.go:86] duration metric: took 398.706213ms for pod "kube-scheduler-ha-181800-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:51:01.553558   69488 pod_ready.go:83] waiting for pod "kube-scheduler-ha-181800-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:51:01.750793   69488 request.go:683] "Waited before sending request" delay="197.139116ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-181800-m03"
	I1018 17:51:01.951100   69488 request.go:683] "Waited before sending request" delay="196.302232ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-181800-m03"
	I1018 17:51:01.954435   69488 pod_ready.go:94] pod "kube-scheduler-ha-181800-m03" is "Ready"
	I1018 17:51:01.954463   69488 pod_ready.go:86] duration metric: took 400.885736ms for pod "kube-scheduler-ha-181800-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:51:01.954476   69488 pod_ready.go:40] duration metric: took 11.227212191s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 17:51:02.019798   69488 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 17:51:02.023234   69488 out.go:179] * Done! kubectl is now configured to use "ha-181800" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 18 17:50:45 ha-181800 crio[665]: time="2025-10-18T17:50:45.572124206Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=3818bf02-e1ec-45e5-8db2-98e9f6e8000a name=/runtime.v1.ImageService/ImageStatus
	Oct 18 17:50:45 ha-181800 crio[665]: time="2025-10-18T17:50:45.573451845Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=bdb883a0-d1f7-44fb-bec3-c90a1d2ecb55 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 17:50:45 ha-181800 crio[665]: time="2025-10-18T17:50:45.573727681Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 17:50:45 ha-181800 crio[665]: time="2025-10-18T17:50:45.584989537Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 17:50:45 ha-181800 crio[665]: time="2025-10-18T17:50:45.585193183Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/87a35d3c6fccfe095ac3771dcbde81fc5df65bc9200469d9386fd64ba3708913/merged/etc/passwd: no such file or directory"
	Oct 18 17:50:45 ha-181800 crio[665]: time="2025-10-18T17:50:45.585221163Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/87a35d3c6fccfe095ac3771dcbde81fc5df65bc9200469d9386fd64ba3708913/merged/etc/group: no such file or directory"
	Oct 18 17:50:45 ha-181800 crio[665]: time="2025-10-18T17:50:45.585494192Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 17:50:45 ha-181800 crio[665]: time="2025-10-18T17:50:45.609702849Z" level=info msg="Created container 3955a976d16cdd5db102930c28bfc2c48f3fd22d0d8f4186e30edecd860f23fd: kube-system/storage-provisioner/storage-provisioner" id=bdb883a0-d1f7-44fb-bec3-c90a1d2ecb55 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 17:50:45 ha-181800 crio[665]: time="2025-10-18T17:50:45.610857892Z" level=info msg="Starting container: 3955a976d16cdd5db102930c28bfc2c48f3fd22d0d8f4186e30edecd860f23fd" id=4f969c9f-8845-4412-b24f-e780eb6068e8 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 17:50:45 ha-181800 crio[665]: time="2025-10-18T17:50:45.615041848Z" level=info msg="Started container" PID=1488 containerID=3955a976d16cdd5db102930c28bfc2c48f3fd22d0d8f4186e30edecd860f23fd description=kube-system/storage-provisioner/storage-provisioner id=4f969c9f-8845-4412-b24f-e780eb6068e8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9d76fad66ab674fdb6d96a586ff07b63771e9f80ffb0da6d960f75270994737e
	Oct 18 17:50:54 ha-181800 crio[665]: time="2025-10-18T17:50:54.473504065Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 17:50:54 ha-181800 crio[665]: time="2025-10-18T17:50:54.479286252Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 17:50:54 ha-181800 crio[665]: time="2025-10-18T17:50:54.479449553Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 17:50:54 ha-181800 crio[665]: time="2025-10-18T17:50:54.479659115Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 17:50:54 ha-181800 crio[665]: time="2025-10-18T17:50:54.500865649Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 17:50:54 ha-181800 crio[665]: time="2025-10-18T17:50:54.502400176Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 17:50:54 ha-181800 crio[665]: time="2025-10-18T17:50:54.502551702Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 17:50:54 ha-181800 crio[665]: time="2025-10-18T17:50:54.511806492Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 17:50:54 ha-181800 crio[665]: time="2025-10-18T17:50:54.511960258Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 17:50:54 ha-181800 crio[665]: time="2025-10-18T17:50:54.51203262Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 17:50:54 ha-181800 crio[665]: time="2025-10-18T17:50:54.515388889Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 17:50:54 ha-181800 crio[665]: time="2025-10-18T17:50:54.515422391Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 17:50:54 ha-181800 crio[665]: time="2025-10-18T17:50:54.515444882Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 17:50:54 ha-181800 crio[665]: time="2025-10-18T17:50:54.526060264Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 17:50:54 ha-181800 crio[665]: time="2025-10-18T17:50:54.526097122Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	3955a976d16cd       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   About a minute ago   Running             storage-provisioner       3                   9d76fad66ab67       storage-provisioner                 kube-system
	b70649f38d4c7       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   2 minutes ago        Running             busybox                   2                   2d6e6e05d930c       busybox-7b57f96db7-fbwpv            default
	244a77fe1563d       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   2 minutes ago        Running             coredns                   2                   ac0ef71240719       coredns-66bc5c9577-p7nbg            kube-system
	45c33b76be4e1       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   2 minutes ago        Running             kindnet-cni               2                   0e97ce88bd2d3       kindnet-72mvm                       kube-system
	8aea864f19933       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   2 minutes ago        Running             kube-proxy                2                   c1b0887367928       kube-proxy-stgvm                    kube-system
	6d80af764ee06       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   2 minutes ago        Running             coredns                   2                   ed23b1fbdbbb3       coredns-66bc5c9577-f6v2w            kube-system
	f2f15c809753a       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   2 minutes ago        Exited              storage-provisioner       2                   9d76fad66ab67       storage-provisioner                 kube-system
	4cff6e37b85af       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   2 minutes ago        Running             kube-controller-manager   8                   c14a7cc20dbd7       kube-controller-manager-ha-181800   kube-system
	787ba7d1db588       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   2 minutes ago        Running             kube-apiserver            8                   aedac42fff114       kube-apiserver-ha-181800            kube-system
	bd6f9d7be6037       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   3 minutes ago        Exited              kube-controller-manager   7                   c14a7cc20dbd7       kube-controller-manager-ha-181800   kube-system
	7df0159a16497       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   3 minutes ago        Exited              kube-apiserver            7                   aedac42fff114       kube-apiserver-ha-181800            kube-system
	8d49f8f056288       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   4 minutes ago        Running             etcd                      2                   c5458ae9aa01d       etcd-ha-181800                      kube-system
	42139c5070f82       2a8917f902489be5a8dd414209c32b77bd644d187ea646d86dbdc31e85efb551   4 minutes ago        Running             kube-vip                  1                   ac5de0631c6c9       kube-vip-ha-181800                  kube-system
	fb83e2f9880f4       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   4 minutes ago        Running             kube-scheduler            2                   042db5c7b2fa5       kube-scheduler-ha-181800            kube-system
	
	
	==> coredns [244a77fe1563d266b1c09476ad0f3463ffeb31f96c85ba703ffe04a24a967497] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42812 - 40298 "HINFO IN 6519948929031597716.8341788919287889456. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.016440056s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [6d80af764ee0602bdd0407c66fcc9de24c8b7b254f4ce667725e048906d15a87] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35970 - 34760 "HINFO IN 4620377952315927478.2937315152384107880. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.029628682s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-181800
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-181800
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=ha-181800
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T17_33_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 17:33:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-181800
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 17:52:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 17:52:33 +0000   Sat, 18 Oct 2025 17:33:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 17:52:33 +0000   Sat, 18 Oct 2025 17:33:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 17:52:33 +0000   Sat, 18 Oct 2025 17:33:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 17:52:33 +0000   Sat, 18 Oct 2025 17:34:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-181800
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                7dc9b150-98ed-4d4d-b680-5759a1e067a9
	  Boot ID:                    8a1d6305-2994-4aa4-a4cc-6b62966b9918
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-fbwpv             0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 coredns-66bc5c9577-f6v2w             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     19m
	  kube-system                 coredns-66bc5c9577-p7nbg             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     19m
	  kube-system                 etcd-ha-181800                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         19m
	  kube-system                 kindnet-72mvm                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      19m
	  kube-system                 kube-apiserver-ha-181800             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-ha-181800    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-stgvm                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-ha-181800             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-vip-ha-181800                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 19m                    kube-proxy       
	  Normal   Starting                 2m26s                  kube-proxy       
	  Normal   Starting                 10m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  19m (x8 over 19m)      kubelet          Node ha-181800 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    19m (x8 over 19m)      kubelet          Node ha-181800 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     19m (x8 over 19m)      kubelet          Node ha-181800 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientPID     19m                    kubelet          Node ha-181800 status is now: NodeHasSufficientPID
	  Normal   Starting                 19m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 19m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  19m                    kubelet          Node ha-181800 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    19m                    kubelet          Node ha-181800 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           19m                    node-controller  Node ha-181800 event: Registered Node ha-181800 in Controller
	  Normal   RegisteredNode           18m                    node-controller  Node ha-181800 event: Registered Node ha-181800 in Controller
	  Normal   NodeReady                18m                    kubelet          Node ha-181800 status is now: NodeReady
	  Normal   RegisteredNode           17m                    node-controller  Node ha-181800 event: Registered Node ha-181800 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-181800 event: Registered Node ha-181800 in Controller
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)      kubelet          Node ha-181800 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     12m (x8 over 12m)      kubelet          Node ha-181800 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)      kubelet          Node ha-181800 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 12m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   RegisteredNode           10m                    node-controller  Node ha-181800 event: Registered Node ha-181800 in Controller
	  Normal   Starting                 4m25s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m25s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  4m25s (x8 over 4m25s)  kubelet          Node ha-181800 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m25s (x8 over 4m25s)  kubelet          Node ha-181800 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m25s (x8 over 4m25s)  kubelet          Node ha-181800 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m30s                  node-controller  Node ha-181800 event: Registered Node ha-181800 in Controller
	  Normal   RegisteredNode           2m25s                  node-controller  Node ha-181800 event: Registered Node ha-181800 in Controller
	  Normal   RegisteredNode           2m1s                   node-controller  Node ha-181800 event: Registered Node ha-181800 in Controller
	  Normal   RegisteredNode           54s                    node-controller  Node ha-181800 event: Registered Node ha-181800 in Controller
	
	
	Name:               ha-181800-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-181800-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=ha-181800
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_18T17_34_02_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 17:34:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-181800-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 17:52:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 17:51:10 +0000   Sat, 18 Oct 2025 17:50:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 17:51:10 +0000   Sat, 18 Oct 2025 17:50:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 17:51:10 +0000   Sat, 18 Oct 2025 17:50:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 17:51:10 +0000   Sat, 18 Oct 2025 17:50:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-181800-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                b2dd8f24-78e0-4eba-8b0c-b12412f7af7d
	  Boot ID:                    8a1d6305-2994-4aa4-a4cc-6b62966b9918
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-cp9q6                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 etcd-ha-181800-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         18m
	  kube-system                 kindnet-86s8z                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      18m
	  kube-system                 kube-apiserver-ha-181800-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-ha-181800-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-dpwpn                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-ha-181800-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-vip-ha-181800-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 18m                    kube-proxy       
	  Normal   Starting                 117s                   kube-proxy       
	  Normal   RegisteredNode           18m                    node-controller  Node ha-181800-m02 event: Registered Node ha-181800-m02 in Controller
	  Normal   RegisteredNode           18m                    node-controller  Node ha-181800-m02 event: Registered Node ha-181800-m02 in Controller
	  Normal   RegisteredNode           17m                    node-controller  Node ha-181800-m02 event: Registered Node ha-181800-m02 in Controller
	  Normal   NodeHasSufficientPID     15m (x7 over 15m)      kubelet          Node ha-181800-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  15m (x9 over 15m)      kubelet          Node ha-181800-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node ha-181800-m02 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 15m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 15m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeNotReady             14m                    node-controller  Node ha-181800-m02 status is now: NodeNotReady
	  Warning  ContainerGCFailed        14m                    kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           13m                    node-controller  Node ha-181800-m02 event: Registered Node ha-181800-m02 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-181800-m02 event: Registered Node ha-181800-m02 in Controller
	  Normal   NodeNotReady             10m                    node-controller  Node ha-181800-m02 status is now: NodeNotReady
	  Warning  CgroupV1                 4m23s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  4m23s (x8 over 4m23s)  kubelet          Node ha-181800-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m23s (x8 over 4m23s)  kubelet          Node ha-181800-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m23s (x8 over 4m23s)  kubelet          Node ha-181800-m02 status is now: NodeHasSufficientPID
	  Warning  ContainerGCFailed        3m23s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           2m30s                  node-controller  Node ha-181800-m02 event: Registered Node ha-181800-m02 in Controller
	  Normal   RegisteredNode           2m25s                  node-controller  Node ha-181800-m02 event: Registered Node ha-181800-m02 in Controller
	  Normal   RegisteredNode           2m1s                   node-controller  Node ha-181800-m02 event: Registered Node ha-181800-m02 in Controller
	  Normal   RegisteredNode           54s                    node-controller  Node ha-181800-m02 event: Registered Node ha-181800-m02 in Controller
	
	
	Name:               ha-181800-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-181800-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=ha-181800
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_18T17_35_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 17:35:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-181800-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 17:52:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 17:51:35 +0000   Sat, 18 Oct 2025 17:50:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 17:51:35 +0000   Sat, 18 Oct 2025 17:50:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 17:51:35 +0000   Sat, 18 Oct 2025 17:50:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 17:51:35 +0000   Sat, 18 Oct 2025 17:50:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-181800-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                4a1abf8a-63a3-4737-81ec-1878616c489b
	  Boot ID:                    8a1d6305-2994-4aa4-a4cc-6b62966b9918
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-lzcbm                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 etcd-ha-181800-m03                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         17m
	  kube-system                 kindnet-9qbbw                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      17m
	  kube-system                 kube-apiserver-ha-181800-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-ha-181800-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-qsqmb                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-ha-181800-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-vip-ha-181800-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 17m                    kube-proxy       
	  Normal   Starting                 2m4s                   kube-proxy       
	  Normal   RegisteredNode           17m                    node-controller  Node ha-181800-m03 event: Registered Node ha-181800-m03 in Controller
	  Normal   RegisteredNode           17m                    node-controller  Node ha-181800-m03 event: Registered Node ha-181800-m03 in Controller
	  Normal   RegisteredNode           17m                    node-controller  Node ha-181800-m03 event: Registered Node ha-181800-m03 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-181800-m03 event: Registered Node ha-181800-m03 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-181800-m03 event: Registered Node ha-181800-m03 in Controller
	  Normal   NodeNotReady             10m                    node-controller  Node ha-181800-m03 status is now: NodeNotReady
	  Normal   RegisteredNode           2m31s                  node-controller  Node ha-181800-m03 event: Registered Node ha-181800-m03 in Controller
	  Warning  CgroupV1                 2m30s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 2m30s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  2m30s (x8 over 2m30s)  kubelet          Node ha-181800-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m30s (x8 over 2m30s)  kubelet          Node ha-181800-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m30s (x8 over 2m30s)  kubelet          Node ha-181800-m03 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m26s                  node-controller  Node ha-181800-m03 event: Registered Node ha-181800-m03 in Controller
	  Normal   RegisteredNode           2m2s                   node-controller  Node ha-181800-m03 event: Registered Node ha-181800-m03 in Controller
	  Normal   RegisteredNode           55s                    node-controller  Node ha-181800-m03 event: Registered Node ha-181800-m03 in Controller
	
	
	Name:               ha-181800-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-181800-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=ha-181800
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_18T17_36_11_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 17:36:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-181800-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 17:52:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 17:52:42 +0000   Sat, 18 Oct 2025 17:50:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 17:52:42 +0000   Sat, 18 Oct 2025 17:50:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 17:52:42 +0000   Sat, 18 Oct 2025 17:50:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 17:52:42 +0000   Sat, 18 Oct 2025 17:50:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-181800-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                afc79373-b3a1-4495-8f28-5c3685ad131e
	  Boot ID:                    8a1d6305-2994-4aa4-a4cc-6b62966b9918
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-88bv7       100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      16m
	  kube-system                 kube-proxy-fj4ww    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 16m                  kube-proxy       
	  Normal   Starting                 105s                 kube-proxy       
	  Normal   Starting                 16m                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 16m                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     16m (x3 over 16m)    kubelet          Node ha-181800-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    16m (x3 over 16m)    kubelet          Node ha-181800-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  16m (x3 over 16m)    kubelet          Node ha-181800-m04 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           16m                  node-controller  Node ha-181800-m04 event: Registered Node ha-181800-m04 in Controller
	  Normal   RegisteredNode           16m                  node-controller  Node ha-181800-m04 event: Registered Node ha-181800-m04 in Controller
	  Normal   RegisteredNode           16m                  node-controller  Node ha-181800-m04 event: Registered Node ha-181800-m04 in Controller
	  Normal   NodeReady                15m                  kubelet          Node ha-181800-m04 status is now: NodeReady
	  Normal   RegisteredNode           13m                  node-controller  Node ha-181800-m04 event: Registered Node ha-181800-m04 in Controller
	  Normal   RegisteredNode           10m                  node-controller  Node ha-181800-m04 event: Registered Node ha-181800-m04 in Controller
	  Normal   NodeNotReady             10m                  node-controller  Node ha-181800-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           2m31s                node-controller  Node ha-181800-m04 event: Registered Node ha-181800-m04 in Controller
	  Normal   RegisteredNode           2m26s                node-controller  Node ha-181800-m04 event: Registered Node ha-181800-m04 in Controller
	  Warning  CgroupV1                 2m6s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 2m6s                 kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  2m3s (x8 over 2m6s)  kubelet          Node ha-181800-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m3s (x8 over 2m6s)  kubelet          Node ha-181800-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m3s (x8 over 2m6s)  kubelet          Node ha-181800-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m2s                 node-controller  Node ha-181800-m04 event: Registered Node ha-181800-m04 in Controller
	  Normal   RegisteredNode           55s                  node-controller  Node ha-181800-m04 event: Registered Node ha-181800-m04 in Controller
	
	
	Name:               ha-181800-m05
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-181800-m05
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=ha-181800
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_18T17_51_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 17:51:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-181800-m05
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 17:52:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 17:52:39 +0000   Sat, 18 Oct 2025 17:51:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 17:52:39 +0000   Sat, 18 Oct 2025 17:51:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 17:52:39 +0000   Sat, 18 Oct 2025 17:51:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 17:52:39 +0000   Sat, 18 Oct 2025 17:52:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.6
	  Hostname:    ha-181800-m05
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                79f1696c-3016-4cac-b220-5cfbf18101cc
	  Boot ID:                    8a1d6305-2994-4aa4-a4cc-6b62966b9918
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.4.0/24
	PodCIDRs:                     10.244.4.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-ha-181800-m05                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         46s
	  kube-system                 kindnet-mtzkz                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      54s
	  kube-system                 kube-apiserver-ha-181800-m05             250m (12%)    0 (0%)      0 (0%)           0 (0%)         46s
	  kube-system                 kube-controller-manager-ha-181800-m05    200m (10%)    0 (0%)      0 (0%)           0 (0%)         46s
	  kube-system                 kube-proxy-7xkff                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	  kube-system                 kube-scheduler-ha-181800-m05             100m (5%)     0 (0%)      0 (0%)           0 (0%)         46s
	  kube-system                 kube-vip-ha-181800-m05                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  Starting        41s   kube-proxy       
	  Normal  RegisteredNode  52s   node-controller  Node ha-181800-m05 event: Registered Node ha-181800-m05 in Controller
	  Normal  RegisteredNode  51s   node-controller  Node ha-181800-m05 event: Registered Node ha-181800-m05 in Controller
	  Normal  RegisteredNode  51s   node-controller  Node ha-181800-m05 event: Registered Node ha-181800-m05 in Controller
	  Normal  RegisteredNode  50s   node-controller  Node ha-181800-m05 event: Registered Node ha-181800-m05 in Controller
	
	
	==> dmesg <==
	[Oct18 16:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014995] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.499206] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.035776] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.808632] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.418900] kauditd_printk_skb: 36 callbacks suppressed
	[Oct18 17:12] overlayfs: idmapped layers are currently not supported
	[  +0.082393] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Oct18 17:18] overlayfs: idmapped layers are currently not supported
	[Oct18 17:19] overlayfs: idmapped layers are currently not supported
	[Oct18 17:33] overlayfs: idmapped layers are currently not supported
	[ +35.716082] overlayfs: idmapped layers are currently not supported
	[Oct18 17:35] overlayfs: idmapped layers are currently not supported
	[Oct18 17:36] overlayfs: idmapped layers are currently not supported
	[Oct18 17:37] overlayfs: idmapped layers are currently not supported
	[Oct18 17:39] overlayfs: idmapped layers are currently not supported
	[  +3.088699] overlayfs: idmapped layers are currently not supported
	[Oct18 17:48] overlayfs: idmapped layers are currently not supported
	[  +2.594489] overlayfs: idmapped layers are currently not supported
	[Oct18 17:50] overlayfs: idmapped layers are currently not supported
	[ +42.240353] overlayfs: idmapped layers are currently not supported
	[Oct18 17:51] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [8d49f8f05628805a90b3d99b19810fe13d13747bb11c8daf730344aef4d339f6] <==
	{"level":"info","ts":"2025-10-18T17:51:37.693855Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"b3fe458a773b8b53"}
	{"level":"info","ts":"2025-10-18T17:51:37.819629Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"b3fe458a773b8b53"}
	{"level":"info","ts":"2025-10-18T17:51:37.820697Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"b3fe458a773b8b53","stream-type":"stream MsgApp v2"}
	{"level":"warn","ts":"2025-10-18T17:51:37.820770Z","caller":"rafthttp/stream.go:264","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"b3fe458a773b8b53"}
	{"level":"info","ts":"2025-10-18T17:51:37.820816Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"b3fe458a773b8b53"}
	{"level":"info","ts":"2025-10-18T17:51:37.870200Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"b3fe458a773b8b53"}
	{"level":"info","ts":"2025-10-18T17:51:37.886047Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"b3fe458a773b8b53","stream-type":"stream Message"}
	{"level":"info","ts":"2025-10-18T17:51:37.886138Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"b3fe458a773b8b53"}
	{"level":"info","ts":"2025-10-18T17:51:49.723288Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-10-18T17:51:50.425890Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-10-18T17:51:50.566233Z","caller":"traceutil/trace.go:172","msg":"trace[1582315109] linearizableReadLoop","detail":"{readStateIndex:4544; appliedIndex:4547; }","duration":"129.885952ms","start":"2025-10-18T17:51:50.436305Z","end":"2025-10-18T17:51:50.566191Z","steps":["trace[1582315109] 'read index received'  (duration: 129.876918ms)","trace[1582315109] 'applied index is now lower than readState.Index'  (duration: 7.918µs)"],"step_count":2}
	{"level":"info","ts":"2025-10-18T17:51:50.566502Z","caller":"traceutil/trace.go:172","msg":"trace[2120789788] transaction","detail":"{read_only:false; number_of_response:1; response_revision:3873; }","duration":"106.818583ms","start":"2025-10-18T17:51:50.459672Z","end":"2025-10-18T17:51:50.566491Z","steps":["trace[2120789788] 'process raft request'  (duration: 10.606923ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T17:51:50.568362Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"149.520857ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-hmlzh\" limit:1 ","response":"range_response_count:1 size:3816"}
	{"level":"info","ts":"2025-10-18T17:51:50.569616Z","caller":"traceutil/trace.go:172","msg":"trace[486774353] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-hmlzh; range_end:; response_count:1; response_revision:3873; }","duration":"150.782757ms","start":"2025-10-18T17:51:50.418818Z","end":"2025-10-18T17:51:50.569601Z","steps":["trace[486774353] 'agreement among raft nodes before linearized reading'  (duration: 149.204012ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T17:51:50.570055Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"154.847465ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kindnet-vlnvb\" limit:1 ","response":"range_response_count:1 size:4085"}
	{"level":"info","ts":"2025-10-18T17:51:50.570504Z","caller":"traceutil/trace.go:172","msg":"trace[396383788] range","detail":"{range_begin:/registry/pods/kube-system/kindnet-vlnvb; range_end:; response_count:1; response_revision:3873; }","duration":"154.944812ms","start":"2025-10-18T17:51:50.415188Z","end":"2025-10-18T17:51:50.570133Z","steps":["trace[396383788] 'agreement among raft nodes before linearized reading'  (duration: 154.739353ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T17:51:50.571391Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"156.201124ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kindnet-n8km8\" limit:1 ","response":"range_response_count:1 size:4073"}
	{"level":"info","ts":"2025-10-18T17:51:50.571488Z","caller":"traceutil/trace.go:172","msg":"trace[2053080079] range","detail":"{range_begin:/registry/pods/kube-system/kindnet-n8km8; range_end:; response_count:1; response_revision:3873; }","duration":"156.308695ms","start":"2025-10-18T17:51:50.415167Z","end":"2025-10-18T17:51:50.571476Z","steps":["trace[2053080079] 'agreement among raft nodes before linearized reading'  (duration: 156.020116ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T17:51:50.596699Z","caller":"traceutil/trace.go:172","msg":"trace[1611105105] transaction","detail":"{read_only:false; number_of_response:1; response_revision:3874; }","duration":"135.872917ms","start":"2025-10-18T17:51:50.460812Z","end":"2025-10-18T17:51:50.596685Z","steps":["trace[1611105105] 'process raft request'  (duration: 135.612943ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T17:51:50.600504Z","caller":"traceutil/trace.go:172","msg":"trace[1690400551] transaction","detail":"{read_only:false; number_of_response:1; response_revision:3875; }","duration":"138.675645ms","start":"2025-10-18T17:51:50.461815Z","end":"2025-10-18T17:51:50.600491Z","steps":["trace[1690400551] 'process raft request'  (duration: 138.434789ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T17:51:50.610397Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"150.891213ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-18T17:51:50.616155Z","caller":"traceutil/trace.go:172","msg":"trace[509590566] range","detail":"{range_begin:/registry/replicasets; range_end:; response_count:0; response_revision:3882; }","duration":"150.960039ms","start":"2025-10-18T17:51:50.459486Z","end":"2025-10-18T17:51:50.610446Z","steps":["trace[509590566] 'agreement among raft nodes before linearized reading'  (duration: 150.875221ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T17:51:59.350226Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-10-18T17:52:00.580917Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-10-18T17:52:07.308970Z","caller":"etcdserver/server.go:1856","msg":"sent merged snapshot","from":"aec36adc501070cc","to":"b3fe458a773b8b53","bytes":7190412,"size":"7.2 MB","took":"31.744859176s"}
	
	
	==> kernel <==
	 17:52:43 up  1:35,  0 user,  load average: 5.94, 3.42, 1.93
	Linux ha-181800 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [45c33b76be4e1c5e61c683306b76aeb0fcbfda863ba2562aee4d85f222728470] <==
	I1018 17:52:14.473840       1 main.go:324] Node ha-181800-m05 has CIDR [10.244.4.0/24] 
	I1018 17:52:14.474579       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 17:52:14.474656       1 main.go:301] handling current node
	I1018 17:52:14.474695       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1018 17:52:14.474730       1 main.go:324] Node ha-181800-m02 has CIDR [10.244.1.0/24] 
	I1018 17:52:24.479000       1 main.go:297] Handling node with IPs: map[192.168.49.6:{}]
	I1018 17:52:24.479103       1 main.go:324] Node ha-181800-m05 has CIDR [10.244.4.0/24] 
	I1018 17:52:24.479269       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 17:52:24.479323       1 main.go:301] handling current node
	I1018 17:52:24.479361       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1018 17:52:24.479387       1 main.go:324] Node ha-181800-m02 has CIDR [10.244.1.0/24] 
	I1018 17:52:24.479480       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1018 17:52:24.479516       1 main.go:324] Node ha-181800-m03 has CIDR [10.244.2.0/24] 
	I1018 17:52:24.479617       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1018 17:52:24.479650       1 main.go:324] Node ha-181800-m04 has CIDR [10.244.3.0/24] 
	I1018 17:52:34.471311       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 17:52:34.471423       1 main.go:301] handling current node
	I1018 17:52:34.471448       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1018 17:52:34.471456       1 main.go:324] Node ha-181800-m02 has CIDR [10.244.1.0/24] 
	I1018 17:52:34.471625       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1018 17:52:34.471638       1 main.go:324] Node ha-181800-m03 has CIDR [10.244.2.0/24] 
	I1018 17:52:34.471702       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1018 17:52:34.471716       1 main.go:324] Node ha-181800-m04 has CIDR [10.244.3.0/24] 
	I1018 17:52:34.471772       1 main.go:297] Handling node with IPs: map[192.168.49.6:{}]
	I1018 17:52:34.471777       1 main.go:324] Node ha-181800-m05 has CIDR [10.244.4.0/24] 
	
	
	==> kube-apiserver [787ba7d1db5885d5987b39cc564271b65d0c3534789595970e69e1fc2af692fa] <==
	I1018 17:50:08.637365       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 17:50:08.648586       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1018 17:50:08.649478       1 aggregator.go:171] initial CRD sync complete...
	I1018 17:50:08.658365       1 autoregister_controller.go:144] Starting autoregister controller
	I1018 17:50:08.658478       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 17:50:08.658528       1 cache.go:39] Caches are synced for autoregister controller
	I1018 17:50:08.648742       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1018 17:50:08.660408       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1018 17:50:08.685820       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1018 17:50:08.685952       1 policy_source.go:240] refreshing policies
	I1018 17:50:08.705489       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1018 17:50:08.711819       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1018 17:50:08.721543       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1018 17:50:08.729935       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1018 17:50:08.730318       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1018 17:50:08.730492       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1018 17:50:08.730520       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1018 17:50:08.730960       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1018 17:50:08.746648       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1018 17:50:08.747504       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 17:50:09.243989       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 17:50:13.235609       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 17:50:36.709527       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1018 17:50:36.815877       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 17:50:46.351258       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-apiserver [7df0159a16497989a32ac40623e8901229679b8716e6b590b84a0d3e1054f4d6] <==
	I1018 17:49:21.128362       1 server.go:150] Version: v1.34.1
	I1018 17:49:21.128401       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W1018 17:49:22.017042       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=scheduling.k8s.io/v1alpha1
	W1018 17:49:22.017075       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=admissionregistration.k8s.io/v1alpha1
	W1018 17:49:22.017084       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=certificates.k8s.io/v1alpha1
	W1018 17:49:22.017089       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=imagepolicy.k8s.io/v1alpha1
	W1018 17:49:22.017094       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=rbac.authorization.k8s.io/v1alpha1
	W1018 17:49:22.017098       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storage.k8s.io/v1alpha1
	W1018 17:49:22.017103       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=coordination.k8s.io/v1alpha2
	W1018 17:49:22.017107       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=resource.k8s.io/v1alpha3
	W1018 17:49:22.017111       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storagemigration.k8s.io/v1alpha1
	W1018 17:49:22.017116       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=internal.apiserver.k8s.io/v1alpha1
	W1018 17:49:22.017120       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=authentication.k8s.io/v1alpha1
	W1018 17:49:22.017125       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=node.k8s.io/v1alpha1
	W1018 17:49:22.035548       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1018 17:49:22.037326       1 logging.go:55] [core] [Channel #4 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1018 17:49:22.037937       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	I1018 17:49:22.044391       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1018 17:49:22.056396       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I1018 17:49:22.056496       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1018 17:49:22.056813       1 instance.go:239] Using reconciler: lease
	W1018 17:49:22.058127       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1018 17:49:42.034705       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1018 17:49:42.036960       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F1018 17:49:42.058557       1 instance.go:232] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [4cff6e37b85af70621f4b47faf3b854223fcae935be9ad45a9a99a523f33574b] <==
	I1018 17:50:17.475715       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 17:50:17.477471       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1018 17:50:17.478740       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-181800-m03"
	I1018 17:50:17.478810       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-181800-m04"
	I1018 17:50:17.478834       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-181800"
	I1018 17:50:17.478868       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-181800-m02"
	I1018 17:50:17.479116       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1018 17:50:17.483521       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1018 17:50:17.491656       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 17:50:17.491691       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 17:50:17.491699       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 17:50:17.491580       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1018 17:50:17.503394       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 17:50:17.508362       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1018 17:50:17.509154       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 17:50:50.411726       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-181800-m04"
	I1018 17:50:55.780269       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-kgtwl EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-kgtwl\": the object has been modified; please apply your changes to the latest version and try again"
	I1018 17:50:55.782431       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"9f28e5d3-f804-46e7-b8a3-f9f96165b245", APIVersion:"v1", ResourceVersion:"306", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-kgtwl EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-kgtwl": the object has been modified; please apply your changes to the latest version and try again
	E1018 17:50:55.860481       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/coredns-66bc5c9577\" failed with Operation cannot be fulfilled on replicasets.apps \"coredns-66bc5c9577\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E1018 17:51:48.671222       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-9lnm7 failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-9lnm7\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1018 17:51:49.455195       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-181800-m05\" does not exist"
	I1018 17:51:49.455335       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-181800-m04"
	I1018 17:51:49.522527       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-181800-m05" podCIDRs=["10.244.4.0/24"]
	I1018 17:51:52.547695       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-181800-m05"
	I1018 17:52:39.400298       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-181800-m04"
	
	
	==> kube-controller-manager [bd6f9d7be603729a0a5200b910dc4c63002c84e58b83cb98debb890cf0bf202d] <==
	I1018 17:49:24.964069       1 serving.go:386] Generated self-signed cert in-memory
	I1018 17:49:25.434782       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1018 17:49:25.434808       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 17:49:25.436324       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1018 17:49:25.436542       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1018 17:49:25.436706       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1018 17:49:25.436723       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1018 17:49:45.439754       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8443/healthz\": dial tcp 192.168.49.2:8443: connect: connection refused"
	
	
	==> kube-proxy [8aea864f19933a28597488b60aa422e08bea2bfd07e84bd2fec57087062dc95f] <==
	I1018 17:50:15.663641       1 server_linux.go:53] "Using iptables proxy"
	I1018 17:50:16.334903       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 17:50:16.464013       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 17:50:16.464050       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1018 17:50:16.464138       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 17:50:16.493669       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 17:50:16.493728       1 server_linux.go:132] "Using iptables Proxier"
	I1018 17:50:16.497992       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 17:50:16.498301       1 server.go:527] "Version info" version="v1.34.1"
	I1018 17:50:16.498377       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 17:50:16.507101       1 config.go:200] "Starting service config controller"
	I1018 17:50:16.507206       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 17:50:16.507258       1 config.go:106] "Starting endpoint slice config controller"
	I1018 17:50:16.507322       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 17:50:16.507360       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 17:50:16.507388       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 17:50:16.510070       1 config.go:309] "Starting node config controller"
	I1018 17:50:16.510095       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 17:50:16.510103       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 17:50:16.607760       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 17:50:16.607802       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 17:50:16.607844       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [fb83e2f9880f48e77ccba9ff1a0240a5eacc8c5f0b7758c70e7c19289ba8795a] <==
	E1018 17:51:49.799031       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-dl6h7\": pod kube-proxy-dl6h7 is already assigned to node \"ha-181800-m05\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-dl6h7" node="ha-181800-m05"
	E1018 17:51:49.799084       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 67c84e61-5f4b-4055-badb-7e5e5a8d6d59(kube-system/kube-proxy-dl6h7) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kube-proxy-dl6h7"
	E1018 17:51:49.799106       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-dl6h7\": pod kube-proxy-dl6h7 is already assigned to node \"ha-181800-m05\"" logger="UnhandledError" pod="kube-system/kube-proxy-dl6h7"
	I1018 17:51:49.804755       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-dl6h7" node="ha-181800-m05"
	E1018 17:51:49.847992       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-652ws\": pod kindnet-652ws is already assigned to node \"ha-181800-m05\"" plugin="DefaultBinder" pod="kube-system/kindnet-652ws" node="ha-181800-m05"
	E1018 17:51:49.848050       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 46966a03-2286-4bd7-84d6-3b294dde0b19(kube-system/kindnet-652ws) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-652ws"
	E1018 17:51:49.848070       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-652ws\": pod kindnet-652ws is already assigned to node \"ha-181800-m05\"" logger="UnhandledError" pod="kube-system/kindnet-652ws"
	I1018 17:51:49.855992       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-652ws" node="ha-181800-m05"
	E1018 17:51:49.900256       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-5t7kh\": pod kube-proxy-5t7kh is already assigned to node \"ha-181800-m05\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-5t7kh" node="ha-181800-m05"
	E1018 17:51:49.900306       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 732edcc6-d4d3-4a0e-b760-33a4fa7eb2a5(kube-system/kube-proxy-5t7kh) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kube-proxy-5t7kh"
	E1018 17:51:49.900326       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-5t7kh\": pod kube-proxy-5t7kh is already assigned to node \"ha-181800-m05\"" logger="UnhandledError" pod="kube-system/kube-proxy-5t7kh"
	I1018 17:51:49.916504       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-5t7kh" node="ha-181800-m05"
	E1018 17:51:50.349815       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-j5wff\": pod kube-proxy-j5wff is already assigned to node \"ha-181800-m05\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-j5wff" node="ha-181800-m05"
	E1018 17:51:50.349866       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod b9575532-0422-4f41-8630-cc21dd86b88d(kube-system/kube-proxy-j5wff) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kube-proxy-j5wff"
	E1018 17:51:50.349885       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-j5wff\": pod kube-proxy-j5wff is already assigned to node \"ha-181800-m05\"" logger="UnhandledError" pod="kube-system/kube-proxy-j5wff"
	E1018 17:51:50.350097       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-n8km8\": pod kindnet-n8km8 is already assigned to node \"ha-181800-m05\"" plugin="DefaultBinder" pod="kube-system/kindnet-n8km8" node="ha-181800-m05"
	E1018 17:51:50.350120       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 3423ff09-8c27-4f2f-a971-f5e03ff5f1f3(kube-system/kindnet-n8km8) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-n8km8"
	I1018 17:51:50.354530       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-j5wff" node="ha-181800-m05"
	E1018 17:51:50.356637       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-n8km8\": pod kindnet-n8km8 is already assigned to node \"ha-181800-m05\"" logger="UnhandledError" pod="kube-system/kindnet-n8km8"
	I1018 17:51:50.356682       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-n8km8" node="ha-181800-m05"
	E1018 17:51:59.811705       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-wg66d\": pod kube-proxy-wg66d is already assigned to node \"ha-181800-m05\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-wg66d" node="ha-181800-m05"
	E1018 17:51:59.811780       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-wg66d\": pod kube-proxy-wg66d is already assigned to node \"ha-181800-m05\"" logger="UnhandledError" pod="kube-system/kube-proxy-wg66d"
	I1018 17:51:59.811805       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-wg66d" node="ha-181800-m05"
	E1018 17:51:59.953333       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-7xkff\": pod kube-proxy-7xkff is already assigned to node \"ha-181800-m05\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-7xkff" node="ha-181800-m05"
	E1018 17:51:59.953417       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-7xkff\": pod kube-proxy-7xkff is already assigned to node \"ha-181800-m05\"" logger="UnhandledError" pod="kube-system/kube-proxy-7xkff"
	
	
	==> kubelet <==
	Oct 18 17:50:12 ha-181800 kubelet[798]: I1018 17:50:12.842479     798 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ha-181800"
	Oct 18 17:50:12 ha-181800 kubelet[798]: E1018 17:50:12.856112     798 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ha-181800\" already exists" pod="kube-system/kube-controller-manager-ha-181800"
	Oct 18 17:50:12 ha-181800 kubelet[798]: I1018 17:50:12.856349     798 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ha-181800"
	Oct 18 17:50:12 ha-181800 kubelet[798]: E1018 17:50:12.867959     798 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ha-181800\" already exists" pod="kube-system/kube-scheduler-ha-181800"
	Oct 18 17:50:12 ha-181800 kubelet[798]: I1018 17:50:12.868003     798 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-vip-ha-181800"
	Oct 18 17:50:12 ha-181800 kubelet[798]: E1018 17:50:12.881408     798 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-vip-ha-181800\" already exists" pod="kube-system/kube-vip-ha-181800"
	Oct 18 17:50:12 ha-181800 kubelet[798]: I1018 17:50:12.881451     798 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-ha-181800"
	Oct 18 17:50:12 ha-181800 kubelet[798]: E1018 17:50:12.896352     798 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-ha-181800\" already exists" pod="kube-system/etcd-ha-181800"
	Oct 18 17:50:13 ha-181800 kubelet[798]: I1018 17:50:13.091654     798 apiserver.go:52] "Watching apiserver"
	Oct 18 17:50:13 ha-181800 kubelet[798]: I1018 17:50:13.099077     798 scope.go:117] "RemoveContainer" containerID="bd6f9d7be603729a0a5200b910dc4c63002c84e58b83cb98debb890cf0bf202d"
	Oct 18 17:50:13 ha-181800 kubelet[798]: I1018 17:50:13.216894     798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5edfc356-9d49-4895-b36a-06c2bd39155a-xtables-lock\") pod \"kindnet-72mvm\" (UID: \"5edfc356-9d49-4895-b36a-06c2bd39155a\") " pod="kube-system/kindnet-72mvm"
	Oct 18 17:50:13 ha-181800 kubelet[798]: I1018 17:50:13.217054     798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/15b89226-91ae-478f-acfe-7841776b1377-xtables-lock\") pod \"kube-proxy-stgvm\" (UID: \"15b89226-91ae-478f-acfe-7841776b1377\") " pod="kube-system/kube-proxy-stgvm"
	Oct 18 17:50:13 ha-181800 kubelet[798]: I1018 17:50:13.217077     798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/15b89226-91ae-478f-acfe-7841776b1377-lib-modules\") pod \"kube-proxy-stgvm\" (UID: \"15b89226-91ae-478f-acfe-7841776b1377\") " pod="kube-system/kube-proxy-stgvm"
	Oct 18 17:50:13 ha-181800 kubelet[798]: I1018 17:50:13.217093     798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/3c6521cd-8e1b-46aa-96a3-39e475e1426c-tmp\") pod \"storage-provisioner\" (UID: \"3c6521cd-8e1b-46aa-96a3-39e475e1426c\") " pod="kube-system/storage-provisioner"
	Oct 18 17:50:13 ha-181800 kubelet[798]: I1018 17:50:13.217110     798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/5edfc356-9d49-4895-b36a-06c2bd39155a-cni-cfg\") pod \"kindnet-72mvm\" (UID: \"5edfc356-9d49-4895-b36a-06c2bd39155a\") " pod="kube-system/kindnet-72mvm"
	Oct 18 17:50:13 ha-181800 kubelet[798]: I1018 17:50:13.217127     798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5edfc356-9d49-4895-b36a-06c2bd39155a-lib-modules\") pod \"kindnet-72mvm\" (UID: \"5edfc356-9d49-4895-b36a-06c2bd39155a\") " pod="kube-system/kindnet-72mvm"
	Oct 18 17:50:13 ha-181800 kubelet[798]: I1018 17:50:13.222063     798 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 18 17:50:13 ha-181800 kubelet[798]: I1018 17:50:13.266801     798 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 18 17:50:13 ha-181800 kubelet[798]: W1018 17:50:13.559633     798 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/5743bf3218eb5aef405eed98d37c004771c8345cbd69290b8943dc0c7cbde7c2/crio-c1b08873679284c397e63dc0b5e86a2778290edfaa47a2d3af86e787870c2624 WatchSource:0}: Error finding container c1b08873679284c397e63dc0b5e86a2778290edfaa47a2d3af86e787870c2624: Status 404 returned error can't find the container with id c1b08873679284c397e63dc0b5e86a2778290edfaa47a2d3af86e787870c2624
	Oct 18 17:50:13 ha-181800 kubelet[798]: W1018 17:50:13.569533     798 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/5743bf3218eb5aef405eed98d37c004771c8345cbd69290b8943dc0c7cbde7c2/crio-0e97ce88bd2d3a36101a0a9930710ba30f34091e61ed0ed0249bd68b5d0f6fa7 WatchSource:0}: Error finding container 0e97ce88bd2d3a36101a0a9930710ba30f34091e61ed0ed0249bd68b5d0f6fa7: Status 404 returned error can't find the container with id 0e97ce88bd2d3a36101a0a9930710ba30f34091e61ed0ed0249bd68b5d0f6fa7
	Oct 18 17:50:13 ha-181800 kubelet[798]: W1018 17:50:13.789592     798 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/5743bf3218eb5aef405eed98d37c004771c8345cbd69290b8943dc0c7cbde7c2/crio-2d6e6e05d930c610e9ac4942479166d3061f0b37055dbc9645478f2923f1ff53 WatchSource:0}: Error finding container 2d6e6e05d930c610e9ac4942479166d3061f0b37055dbc9645478f2923f1ff53: Status 404 returned error can't find the container with id 2d6e6e05d930c610e9ac4942479166d3061f0b37055dbc9645478f2923f1ff53
	Oct 18 17:50:17 ha-181800 kubelet[798]: E1018 17:50:17.091585     798 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/351deab77f22682d337e98537451625e6f5def60ef97378fe2ea489cd9cb173d/diff" to get inode usage: stat /var/lib/containers/storage/overlay/351deab77f22682d337e98537451625e6f5def60ef97378fe2ea489cd9cb173d/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_kube-controller-manager-ha-181800_9656c3d6ff12279b641632c7e3275a8a/kube-controller-manager/6.log" to get inode usage: stat /var/log/pods/kube-system_kube-controller-manager-ha-181800_9656c3d6ff12279b641632c7e3275a8a/kube-controller-manager/6.log: no such file or directory
	Oct 18 17:50:17 ha-181800 kubelet[798]: E1018 17:50:17.097904     798 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/3a8ceae8950ea9bca2bf6a05f4cb7633f55f4458c755f32741110642edbfd7ba/diff" to get inode usage: stat /var/lib/containers/storage/overlay/3a8ceae8950ea9bca2bf6a05f4cb7633f55f4458c755f32741110642edbfd7ba/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_kube-apiserver-ha-181800_f173b0166ea7317b529b58e20ef8d65f/kube-apiserver/6.log" to get inode usage: stat /var/log/pods/kube-system_kube-apiserver-ha-181800_f173b0166ea7317b529b58e20ef8d65f/kube-apiserver/6.log: no such file or directory
	Oct 18 17:50:17 ha-181800 kubelet[798]: E1018 17:50:17.148404     798 cadvisor_stats_provider.go:567] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/crio/crio-dad8e190116effc9294125133d608015a4f2ec86c95f308f26d5e4d771de4985\": RecentStats: unable to find data in memory cache]"
	Oct 18 17:50:45 ha-181800 kubelet[798]: I1018 17:50:45.570659     798 scope.go:117] "RemoveContainer" containerID="f2f15c809753a0cd811b332e6f6a8f9b5be888da593a2286ff085903e5ec3a12"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-181800 -n ha-181800
helpers_test.go:269: (dbg) Run:  kubectl --context ha-181800 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/AddSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (94.16s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (4.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.578299288s)
ha_test.go:305: expected profile "ha-181800" in json of 'profile list' to include 4 nodes but have 5 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-181800\",\"Status\":\"HAppy\",\"Config\":{\"Name\":\"ha-181800\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfssh
ares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-181800\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.49.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"I
P\":\"192.168.49.3\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.49.4\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.49.5\",\"Port\":0,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":false,\"Worker\":true},{\"Name\":\"m05\",\"IP\":\"192.168.49.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong
\":false,\"kubeflow\":false,\"kubetail\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountM
Size\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-arm64 profile list --output json"
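The assertion above decodes the output of `out/minikube-linux-arm64 profile list --output json` and compares the length of the profile's Nodes array against the expected count: 4 nodes were expected after the secondary-node add, but the dump shows 5 entries, including the freshly added m05 with an empty ContainerRuntime. Below is a minimal sketch of that node-count check, assuming only a `minikube` binary on PATH and the field names visible in the JSON dump above; it is not the actual ha_test.go helper.

// nodecount.go: sketch of the profile-list node count check (assumes `minikube` on PATH).
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileList mirrors the subset of the `profile list --output json` payload
// used here; field names are taken from the JSON shown in the failure message.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Config struct {
			Nodes []struct {
				Name         string `json:"Name"`
				ControlPlane bool   `json:"ControlPlane"`
			} `json:"Nodes"`
		} `json:"Config"`
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("minikube", "profile", "list", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		panic(err)
	}
	// Print the node count per profile, the quantity the test compares against
	// its expectation (4 here, while the captured output reports 5).
	for _, p := range pl.Valid {
		fmt.Printf("%s: %d nodes\n", p.Name, len(p.Config.Nodes))
	}
}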
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-181800
helpers_test.go:243: (dbg) docker inspect ha-181800:

-- stdout --
	[
	    {
	        "Id": "5743bf3218eb5aef405eed98d37c004771c8345cbd69290b8943dc0c7cbde7c2",
	        "Created": "2025-10-18T17:32:56.632116312Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 69617,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T17:48:09.683613005Z",
	            "FinishedAt": "2025-10-18T17:48:08.862033359Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/5743bf3218eb5aef405eed98d37c004771c8345cbd69290b8943dc0c7cbde7c2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5743bf3218eb5aef405eed98d37c004771c8345cbd69290b8943dc0c7cbde7c2/hostname",
	        "HostsPath": "/var/lib/docker/containers/5743bf3218eb5aef405eed98d37c004771c8345cbd69290b8943dc0c7cbde7c2/hosts",
	        "LogPath": "/var/lib/docker/containers/5743bf3218eb5aef405eed98d37c004771c8345cbd69290b8943dc0c7cbde7c2/5743bf3218eb5aef405eed98d37c004771c8345cbd69290b8943dc0c7cbde7c2-json.log",
	        "Name": "/ha-181800",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-181800:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-181800",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5743bf3218eb5aef405eed98d37c004771c8345cbd69290b8943dc0c7cbde7c2",
	                "LowerDir": "/var/lib/docker/overlay2/3ee9cac217f1df57c366a054bdb2c2082365ef42fdd9cc6be8e55acf85cb35b8-init/diff:/var/lib/docker/overlay2/584ab177b02ad2db5330471b7171ad39934c457d8615b9ee4939a04b59f78474/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3ee9cac217f1df57c366a054bdb2c2082365ef42fdd9cc6be8e55acf85cb35b8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3ee9cac217f1df57c366a054bdb2c2082365ef42fdd9cc6be8e55acf85cb35b8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3ee9cac217f1df57c366a054bdb2c2082365ef42fdd9cc6be8e55acf85cb35b8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-181800",
	                "Source": "/var/lib/docker/volumes/ha-181800/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-181800",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-181800",
	                "name.minikube.sigs.k8s.io": "ha-181800",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4110ab73f7f9137e0eb013438b540b426c3fa9fedc1bed76ec7ffcc4fc35499f",
	            "SandboxKey": "/var/run/docker/netns/4110ab73f7f9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32818"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32819"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32822"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32820"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32821"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-181800": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d2:81:2f:47:7d:4c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "903568cdf824d38f52cb9a58c116a852c83eb599cf8cc87e25ba21b593e45142",
	                    "EndpointID": "9a2af9d91b868a8642ef1db81d818bc623c9c1134408c932f61ec269578e0c92",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-181800",
	                        "5743bf3218eb"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-181800 -n ha-181800
helpers_test.go:252: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p ha-181800 logs -n 25: (1.809645865s)
helpers_test.go:260: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-181800 ssh -n ha-181800-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ ssh     │ ha-181800 ssh -n ha-181800-m04 sudo cat /home/docker/cp-test_ha-181800-m03_ha-181800-m04.txt                                         │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ cp      │ ha-181800 cp testdata/cp-test.txt ha-181800-m04:/home/docker/cp-test.txt                                                             │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ ssh     │ ha-181800 ssh -n ha-181800-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ cp      │ ha-181800 cp ha-181800-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1463328482/001/cp-test_ha-181800-m04.txt │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ ssh     │ ha-181800 ssh -n ha-181800-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ cp      │ ha-181800 cp ha-181800-m04:/home/docker/cp-test.txt ha-181800:/home/docker/cp-test_ha-181800-m04_ha-181800.txt                       │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ ssh     │ ha-181800 ssh -n ha-181800-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ ssh     │ ha-181800 ssh -n ha-181800 sudo cat /home/docker/cp-test_ha-181800-m04_ha-181800.txt                                                 │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ cp      │ ha-181800 cp ha-181800-m04:/home/docker/cp-test.txt ha-181800-m02:/home/docker/cp-test_ha-181800-m04_ha-181800-m02.txt               │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ ssh     │ ha-181800 ssh -n ha-181800-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ ssh     │ ha-181800 ssh -n ha-181800-m02 sudo cat /home/docker/cp-test_ha-181800-m04_ha-181800-m02.txt                                         │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ cp      │ ha-181800 cp ha-181800-m04:/home/docker/cp-test.txt ha-181800-m03:/home/docker/cp-test_ha-181800-m04_ha-181800-m03.txt               │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ ssh     │ ha-181800 ssh -n ha-181800-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ ssh     │ ha-181800 ssh -n ha-181800-m03 sudo cat /home/docker/cp-test_ha-181800-m04_ha-181800-m03.txt                                         │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ node    │ ha-181800 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:37 UTC │
	│ node    │ ha-181800 node start m02 --alsologtostderr -v 5                                                                                      │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:37 UTC │ 18 Oct 25 17:39 UTC │
	│ node    │ ha-181800 node list --alsologtostderr -v 5                                                                                           │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:39 UTC │                     │
	│ stop    │ ha-181800 stop --alsologtostderr -v 5                                                                                                │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:39 UTC │ 18 Oct 25 17:39 UTC │
	│ start   │ ha-181800 start --wait true --alsologtostderr -v 5                                                                                   │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:39 UTC │                     │
	│ node    │ ha-181800 node list --alsologtostderr -v 5                                                                                           │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:47 UTC │                     │
	│ node    │ ha-181800 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:47 UTC │                     │
	│ stop    │ ha-181800 stop --alsologtostderr -v 5                                                                                                │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:47 UTC │ 18 Oct 25 17:48 UTC │
	│ start   │ ha-181800 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio                                         │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:48 UTC │ 18 Oct 25 17:51 UTC │
	│ node    │ ha-181800 node add --control-plane --alsologtostderr -v 5                                                                            │ ha-181800 │ jenkins │ v1.37.0 │ 18 Oct 25 17:51 UTC │ 18 Oct 25 17:52 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 17:48:09
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 17:48:09.416034   69488 out.go:360] Setting OutFile to fd 1 ...
	I1018 17:48:09.416413   69488 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 17:48:09.416429   69488 out.go:374] Setting ErrFile to fd 2...
	I1018 17:48:09.416435   69488 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 17:48:09.416751   69488 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-2509/.minikube/bin
	I1018 17:48:09.417210   69488 out.go:368] Setting JSON to false
	I1018 17:48:09.418048   69488 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":5439,"bootTime":1760804251,"procs":151,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1018 17:48:09.418116   69488 start.go:141] virtualization:  
	I1018 17:48:09.421406   69488 out.go:179] * [ha-181800] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 17:48:09.425201   69488 out.go:179]   - MINIKUBE_LOCATION=21409
	I1018 17:48:09.425270   69488 notify.go:220] Checking for updates...
	I1018 17:48:09.431395   69488 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 17:48:09.434249   69488 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-2509/kubeconfig
	I1018 17:48:09.437177   69488 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-2509/.minikube
	I1018 17:48:09.439990   69488 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 17:48:09.442873   69488 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 17:48:09.446186   69488 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:48:09.446753   69488 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 17:48:09.469689   69488 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 17:48:09.469810   69488 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 17:48:09.525756   69488 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-18 17:48:09.516473467 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 17:48:09.525901   69488 docker.go:318] overlay module found
	I1018 17:48:09.529121   69488 out.go:179] * Using the docker driver based on existing profile
	I1018 17:48:09.532020   69488 start.go:305] selected driver: docker
	I1018 17:48:09.532065   69488 start.go:925] validating driver "docker" against &{Name:ha-181800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-181800 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inacc
el:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 17:48:09.532200   69488 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 17:48:09.532300   69488 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 17:48:09.595274   69488 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-18 17:48:09.586260967 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 17:48:09.595672   69488 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 17:48:09.595711   69488 cni.go:84] Creating CNI manager for ""
	I1018 17:48:09.595769   69488 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1018 17:48:09.595821   69488 start.go:349] cluster config:
	{Name:ha-181800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-181800 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 17:48:09.600762   69488 out.go:179] * Starting "ha-181800" primary control-plane node in "ha-181800" cluster
	I1018 17:48:09.603624   69488 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 17:48:09.606573   69488 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 17:48:09.609415   69488 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 17:48:09.609455   69488 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 17:48:09.609472   69488 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1018 17:48:09.609485   69488 cache.go:58] Caching tarball of preloaded images
	I1018 17:48:09.609580   69488 preload.go:233] Found /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 17:48:09.609590   69488 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 17:48:09.609731   69488 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/config.json ...
	I1018 17:48:09.629715   69488 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 17:48:09.629738   69488 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 17:48:09.629751   69488 cache.go:232] Successfully downloaded all kic artifacts
	I1018 17:48:09.629773   69488 start.go:360] acquireMachinesLock for ha-181800: {Name:mk3f5dfba2ab7d01f94f924dfcc5edab5f076901 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 17:48:09.629829   69488 start.go:364] duration metric: took 36.414µs to acquireMachinesLock for "ha-181800"
	I1018 17:48:09.629854   69488 start.go:96] Skipping create...Using existing machine configuration
	I1018 17:48:09.629859   69488 fix.go:54] fixHost starting: 
	I1018 17:48:09.630111   69488 cli_runner.go:164] Run: docker container inspect ha-181800 --format={{.State.Status}}
	I1018 17:48:09.646601   69488 fix.go:112] recreateIfNeeded on ha-181800: state=Stopped err=<nil>
	W1018 17:48:09.646633   69488 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 17:48:09.649905   69488 out.go:252] * Restarting existing docker container for "ha-181800" ...
	I1018 17:48:09.649988   69488 cli_runner.go:164] Run: docker start ha-181800
	I1018 17:48:09.903186   69488 cli_runner.go:164] Run: docker container inspect ha-181800 --format={{.State.Status}}
	I1018 17:48:09.925021   69488 kic.go:430] container "ha-181800" state is running.
	I1018 17:48:09.925620   69488 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-181800
	I1018 17:48:09.948773   69488 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/config.json ...
	I1018 17:48:09.949327   69488 machine.go:93] provisionDockerMachine start ...
	I1018 17:48:09.949403   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:48:09.972918   69488 main.go:141] libmachine: Using SSH client type: native
	I1018 17:48:09.973247   69488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I1018 17:48:09.973265   69488 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 17:48:09.973813   69488 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1018 17:48:13.124675   69488 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-181800
	
	I1018 17:48:13.124706   69488 ubuntu.go:182] provisioning hostname "ha-181800"
	I1018 17:48:13.124768   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:48:13.142493   69488 main.go:141] libmachine: Using SSH client type: native
	I1018 17:48:13.142802   69488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I1018 17:48:13.142819   69488 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-181800 && echo "ha-181800" | sudo tee /etc/hostname
	I1018 17:48:13.298978   69488 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-181800
	
	I1018 17:48:13.299071   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:48:13.318549   69488 main.go:141] libmachine: Using SSH client type: native
	I1018 17:48:13.318864   69488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I1018 17:48:13.318885   69488 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-181800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-181800/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-181800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 17:48:13.464891   69488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 17:48:13.464913   69488 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-2509/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-2509/.minikube}
	I1018 17:48:13.464930   69488 ubuntu.go:190] setting up certificates
	I1018 17:48:13.464957   69488 provision.go:84] configureAuth start
	I1018 17:48:13.465015   69488 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-181800
	I1018 17:48:13.482208   69488 provision.go:143] copyHostCerts
	I1018 17:48:13.482250   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem
	I1018 17:48:13.482283   69488 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem, removing ...
	I1018 17:48:13.482302   69488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem
	I1018 17:48:13.482377   69488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem (1078 bytes)
	I1018 17:48:13.482463   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem
	I1018 17:48:13.482486   69488 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem, removing ...
	I1018 17:48:13.482493   69488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem
	I1018 17:48:13.482520   69488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem (1123 bytes)
	I1018 17:48:13.482562   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem
	I1018 17:48:13.482582   69488 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem, removing ...
	I1018 17:48:13.482588   69488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem
	I1018 17:48:13.482612   69488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem (1675 bytes)
	I1018 17:48:13.482660   69488 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem org=jenkins.ha-181800 san=[127.0.0.1 192.168.49.2 ha-181800 localhost minikube]
	I1018 17:48:14.423915   69488 provision.go:177] copyRemoteCerts
	I1018 17:48:14.423988   69488 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 17:48:14.424038   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:48:14.441172   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800/id_rsa Username:docker}
	I1018 17:48:14.544666   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1018 17:48:14.544730   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1018 17:48:14.562271   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1018 17:48:14.562355   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 17:48:14.579774   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1018 17:48:14.579882   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 17:48:14.597738   69488 provision.go:87] duration metric: took 1.132758135s to configureAuth
	I1018 17:48:14.597766   69488 ubuntu.go:206] setting minikube options for container-runtime
	I1018 17:48:14.598014   69488 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:48:14.598118   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:48:14.616530   69488 main.go:141] libmachine: Using SSH client type: native
	I1018 17:48:14.616832   69488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I1018 17:48:14.616852   69488 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 17:48:14.938623   69488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 17:48:14.938694   69488 machine.go:96] duration metric: took 4.989343324s to provisionDockerMachine
	I1018 17:48:14.938719   69488 start.go:293] postStartSetup for "ha-181800" (driver="docker")
	I1018 17:48:14.938743   69488 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 17:48:14.938827   69488 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 17:48:14.938907   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:48:14.961006   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800/id_rsa Username:docker}
	I1018 17:48:15.069145   69488 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 17:48:15.072788   69488 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 17:48:15.072820   69488 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 17:48:15.072832   69488 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/addons for local assets ...
	I1018 17:48:15.072889   69488 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/files for local assets ...
	I1018 17:48:15.073008   69488 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> 43202.pem in /etc/ssl/certs
	I1018 17:48:15.073020   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> /etc/ssl/certs/43202.pem
	I1018 17:48:15.073124   69488 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 17:48:15.080710   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /etc/ssl/certs/43202.pem (1708 bytes)
	I1018 17:48:15.098679   69488 start.go:296] duration metric: took 159.932309ms for postStartSetup
	I1018 17:48:15.098839   69488 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 17:48:15.098888   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:48:15.116684   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800/id_rsa Username:docker}
	I1018 17:48:15.217789   69488 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 17:48:15.222543   69488 fix.go:56] duration metric: took 5.59267659s for fixHost
	I1018 17:48:15.222570   69488 start.go:83] releasing machines lock for "ha-181800", held for 5.59272729s
	I1018 17:48:15.222640   69488 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-181800
	I1018 17:48:15.239602   69488 ssh_runner.go:195] Run: cat /version.json
	I1018 17:48:15.239657   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:48:15.239935   69488 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 17:48:15.239989   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:48:15.258489   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800/id_rsa Username:docker}
	I1018 17:48:15.259704   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800/id_rsa Username:docker}
	I1018 17:48:15.360628   69488 ssh_runner.go:195] Run: systemctl --version
	I1018 17:48:15.453252   69488 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 17:48:15.490459   69488 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 17:48:15.494882   69488 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 17:48:15.494987   69488 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 17:48:15.502526   69488 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 17:48:15.502555   69488 start.go:495] detecting cgroup driver to use...
	I1018 17:48:15.502585   69488 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 17:48:15.502634   69488 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 17:48:15.518083   69488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 17:48:15.531171   69488 docker.go:218] disabling cri-docker service (if available) ...
	I1018 17:48:15.531254   69488 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 17:48:15.547013   69488 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 17:48:15.559697   69488 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 17:48:15.666369   69488 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 17:48:15.774518   69488 docker.go:234] disabling docker service ...
	I1018 17:48:15.774580   69488 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 17:48:15.789730   69488 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 17:48:15.802288   69488 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 17:48:15.919408   69488 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 17:48:16.029842   69488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 17:48:16.043317   69488 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 17:48:16.059310   69488 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 17:48:16.059453   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:48:16.069280   69488 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 17:48:16.069350   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:48:16.078814   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:48:16.087874   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:48:16.097837   69488 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 17:48:16.106890   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:48:16.115708   69488 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:48:16.123935   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:48:16.132770   69488 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 17:48:16.140320   69488 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 17:48:16.147761   69488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 17:48:16.260916   69488 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 17:48:16.404712   69488 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 17:48:16.404830   69488 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 17:48:16.408509   69488 start.go:563] Will wait 60s for crictl version
	I1018 17:48:16.408623   69488 ssh_runner.go:195] Run: which crictl
	I1018 17:48:16.411907   69488 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 17:48:16.435137   69488 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 17:48:16.435295   69488 ssh_runner.go:195] Run: crio --version
	I1018 17:48:16.466039   69488 ssh_runner.go:195] Run: crio --version
	I1018 17:48:16.501936   69488 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 17:48:16.504878   69488 cli_runner.go:164] Run: docker network inspect ha-181800 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 17:48:16.520780   69488 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1018 17:48:16.524665   69488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 17:48:16.534613   69488 kubeadm.go:883] updating cluster {Name:ha-181800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-181800 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:
false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 17:48:16.534762   69488 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 17:48:16.534819   69488 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 17:48:16.574503   69488 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 17:48:16.574531   69488 crio.go:433] Images already preloaded, skipping extraction
	I1018 17:48:16.574590   69488 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 17:48:16.600203   69488 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 17:48:16.600227   69488 cache_images.go:85] Images are preloaded, skipping loading
	I1018 17:48:16.600237   69488 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1018 17:48:16.600342   69488 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-181800 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-181800 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 17:48:16.600422   69488 ssh_runner.go:195] Run: crio config
	I1018 17:48:16.665910   69488 cni.go:84] Creating CNI manager for ""
	I1018 17:48:16.665937   69488 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1018 17:48:16.665961   69488 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 17:48:16.665986   69488 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-181800 NodeName:ha-181800 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 17:48:16.666112   69488 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-181800"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
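The kubeadm config rendered above is later copied to /var/tmp/minikube/kubeadm.yaml.new on the node. A minimal sketch, assuming the standard kubeadm tooling rather than minikube's internal flow, of how such a file can be sanity-checked in place:

    # Validates the InitConfiguration/ClusterConfiguration/KubeletConfiguration documents;
    # exits non-zero on unknown or malformed fields.
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new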
	I1018 17:48:16.666132   69488 kube-vip.go:115] generating kube-vip config ...
	I1018 17:48:16.666191   69488 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1018 17:48:16.678158   69488 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
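kube-vip's control-plane load-balancing is skipped here because the `lsmod | grep ip_vs` probe found no IPVS modules. A minimal sketch, assuming a kernel that ships the modules, of how they would be made available so the probe succeeds (not something this run attempts):

    # Load the IPVS core plus the common schedulers, then re-run the same check.
    sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh
    lsmod | grep ip_vs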
	I1018 17:48:16.678333   69488 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
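This manifest is written below as a static pod (kube-vip.yaml) so that kubelet runs kube-vip directly and the HA VIP 192.168.49.254 is announced on eth0 of the elected control-plane node. A minimal sketch of how the effect can be observed on a running node (illustrative commands, not part of this run):

    # The VIP should appear as a secondary address on eth0, and the kube-vip
    # container should be visible to the CRI runtime.
    ip addr show dev eth0 | grep 192.168.49.254
    sudo crictl ps --name kube-vip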
	I1018 17:48:16.678419   69488 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 17:48:16.686215   69488 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 17:48:16.686327   69488 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1018 17:48:16.693873   69488 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1018 17:48:16.706512   69488 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 17:48:16.719311   69488 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1018 17:48:16.731738   69488 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1018 17:48:16.744107   69488 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1018 17:48:16.747479   69488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
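The rewrite above pins control-plane.minikube.internal (the controlPlaneEndpoint from the kubeadm config) to the kube-vip address 192.168.49.254 in /etc/hosts. A minimal sketch of how the resolution and the endpoint could be verified from the node (an illustration, not commands from this run):

    getent hosts control-plane.minikube.internal        # expect: 192.168.49.254
    curl -k --max-time 2 https://control-plane.minikube.internal:8443/healthz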
	I1018 17:48:16.756979   69488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 17:48:16.873983   69488 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 17:48:16.890078   69488 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800 for IP: 192.168.49.2
	I1018 17:48:16.890141   69488 certs.go:195] generating shared ca certs ...
	I1018 17:48:16.890170   69488 certs.go:227] acquiring lock for ca certs: {Name:mk544ed642fa2832e9f6dd22fa45f3270b7c1ad4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:48:16.890342   69488 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key
	I1018 17:48:16.890408   69488 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key
	I1018 17:48:16.890429   69488 certs.go:257] generating profile certs ...
	I1018 17:48:16.890571   69488 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/client.key
	I1018 17:48:16.890683   69488 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.key.46a58690
	I1018 17:48:16.890745   69488 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.key
	I1018 17:48:16.890767   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1018 17:48:16.890806   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1018 17:48:16.890839   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1018 17:48:16.890866   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1018 17:48:16.890905   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1018 17:48:16.890937   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1018 17:48:16.890965   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1018 17:48:16.891003   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1018 17:48:16.891075   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem (1338 bytes)
	W1018 17:48:16.891135   69488 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320_empty.pem, impossibly tiny 0 bytes
	I1018 17:48:16.891163   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 17:48:16.891206   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem (1078 bytes)
	I1018 17:48:16.891265   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem (1123 bytes)
	I1018 17:48:16.891308   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem (1675 bytes)
	I1018 17:48:16.891389   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem (1708 bytes)
	I1018 17:48:16.891447   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> /usr/share/ca-certificates/43202.pem
	I1018 17:48:16.891488   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:48:16.891521   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem -> /usr/share/ca-certificates/4320.pem
	I1018 17:48:16.892071   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 17:48:16.910107   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 17:48:16.927560   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 17:48:16.944252   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 17:48:16.961007   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1018 17:48:16.981715   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 17:48:17.002129   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 17:48:17.028151   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 17:48:17.050134   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /usr/share/ca-certificates/43202.pem (1708 bytes)
	I1018 17:48:17.076842   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 17:48:17.102342   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem --> /usr/share/ca-certificates/4320.pem (1338 bytes)
	I1018 17:48:17.120809   69488 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 17:48:17.135197   69488 ssh_runner.go:195] Run: openssl version
	I1018 17:48:17.141316   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43202.pem && ln -fs /usr/share/ca-certificates/43202.pem /etc/ssl/certs/43202.pem"
	I1018 17:48:17.149779   69488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43202.pem
	I1018 17:48:17.156384   69488 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 17:19 /usr/share/ca-certificates/43202.pem
	I1018 17:48:17.156498   69488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43202.pem
	I1018 17:48:17.198104   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43202.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 17:48:17.206025   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 17:48:17.214061   69488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:48:17.217558   69488 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:48:17.217636   69488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:48:17.259653   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 17:48:17.267330   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4320.pem && ln -fs /usr/share/ca-certificates/4320.pem /etc/ssl/certs/4320.pem"
	I1018 17:48:17.275410   69488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4320.pem
	I1018 17:48:17.278912   69488 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 17:19 /usr/share/ca-certificates/4320.pem
	I1018 17:48:17.279004   69488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4320.pem
	I1018 17:48:17.319663   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4320.pem /etc/ssl/certs/51391683.0"
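The three ln -fs commands above use OpenSSL's subject-hash naming: each trusted certificate is linked as /etc/ssl/certs/<hash>.0, where the hash is what `openssl x509 -hash` prints for it. A minimal sketch of the same step for one certificate (the b5213941 value matches the minikubeCA link above):

    # Derive the subject hash, then link the cert under that name so OpenSSL can find it.
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints e.g. b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"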
	I1018 17:48:17.327893   69488 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 17:48:17.331787   69488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 17:48:17.372669   69488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 17:48:17.413640   69488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 17:48:17.455669   69488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 17:48:17.503310   69488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 17:48:17.553128   69488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
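The six openssl runs above are 24-hour expiry checks: -checkend 86400 exits 0 only if the certificate will still be valid 86400 seconds from now. For example (illustrative, not from this run):

    openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
      && echo "valid for at least 24h" || echo "expires within 24h"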
	I1018 17:48:17.610923   69488 kubeadm.go:400] StartCluster: {Name:ha-181800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-181800 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:fal
se ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fals
e DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 17:48:17.611069   69488 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 17:48:17.611141   69488 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 17:48:17.693793   69488 cri.go:89] found id: "42139c5070f82bb1e1dd7564661f58a74b134ab219b910335d022b2235e65fc0"
	I1018 17:48:17.693817   69488 cri.go:89] found id: "405d4b2711179ef2be985a5942049e2e36688b992d1fd9f96f2e882cfa95bfd5"
	I1018 17:48:17.693822   69488 cri.go:89] found id: "fb83e2f9880f48e77ccba9ff1a0240a5eacc8c5f0b7758c70e7c19289ba8795a"
	I1018 17:48:17.693826   69488 cri.go:89] found id: ""
	I1018 17:48:17.693886   69488 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 17:48:17.727781   69488 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T17:48:17Z" level=error msg="open /run/runc: no such file or directory"
	I1018 17:48:17.727885   69488 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 17:48:17.752985   69488 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 17:48:17.753011   69488 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 17:48:17.753077   69488 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 17:48:17.766549   69488 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 17:48:17.766998   69488 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-181800" does not appear in /home/jenkins/minikube-integration/21409-2509/kubeconfig
	I1018 17:48:17.767116   69488 kubeconfig.go:62] /home/jenkins/minikube-integration/21409-2509/kubeconfig needs updating (will repair): [kubeconfig missing "ha-181800" cluster setting kubeconfig missing "ha-181800" context setting]
	I1018 17:48:17.767408   69488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/kubeconfig: {Name:mk229d7d0b6575771899e0ce8a346bbddf5cf86b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:48:17.768000   69488 kapi.go:59] client config for ha-181800: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/client.key", CAFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, Us
erAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1018 17:48:17.768691   69488 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1018 17:48:17.768713   69488 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1018 17:48:17.768754   69488 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1018 17:48:17.768718   69488 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1018 17:48:17.768800   69488 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1018 17:48:17.768817   69488 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1018 17:48:17.769158   69488 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 17:48:17.777893   69488 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1018 17:48:17.777928   69488 kubeadm.go:601] duration metric: took 24.910349ms to restartPrimaryControlPlane
	I1018 17:48:17.777937   69488 kubeadm.go:402] duration metric: took 167.022952ms to StartCluster
	I1018 17:48:17.777952   69488 settings.go:142] acquiring lock: {Name:mk3a3fd093bc95e20cc1842611fedcbe4a79e692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:48:17.778019   69488 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-2509/kubeconfig
	I1018 17:48:17.778655   69488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/kubeconfig: {Name:mk229d7d0b6575771899e0ce8a346bbddf5cf86b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:48:17.778876   69488 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 17:48:17.778908   69488 start.go:241] waiting for startup goroutines ...
	I1018 17:48:17.778916   69488 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 17:48:17.779460   69488 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:48:17.784791   69488 out.go:179] * Enabled addons: 
	I1018 17:48:17.787780   69488 addons.go:514] duration metric: took 8.843165ms for enable addons: enabled=[]
	I1018 17:48:17.787841   69488 start.go:246] waiting for cluster config update ...
	I1018 17:48:17.787851   69488 start.go:255] writing updated cluster config ...
	I1018 17:48:17.791154   69488 out.go:203] 
	I1018 17:48:17.794423   69488 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:48:17.794545   69488 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/config.json ...
	I1018 17:48:17.797951   69488 out.go:179] * Starting "ha-181800-m02" control-plane node in "ha-181800" cluster
	I1018 17:48:17.800906   69488 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 17:48:17.803852   69488 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 17:48:17.806813   69488 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 17:48:17.806848   69488 cache.go:58] Caching tarball of preloaded images
	I1018 17:48:17.806951   69488 preload.go:233] Found /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 17:48:17.806966   69488 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 17:48:17.807089   69488 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/config.json ...
	I1018 17:48:17.807301   69488 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 17:48:17.833480   69488 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 17:48:17.833505   69488 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 17:48:17.833520   69488 cache.go:232] Successfully downloaded all kic artifacts
	I1018 17:48:17.833542   69488 start.go:360] acquireMachinesLock for ha-181800-m02: {Name:mk36a488c0fbfc8557c6ba291b969aad85b45635 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 17:48:17.833604   69488 start.go:364] duration metric: took 42.142µs to acquireMachinesLock for "ha-181800-m02"
	I1018 17:48:17.833629   69488 start.go:96] Skipping create...Using existing machine configuration
	I1018 17:48:17.833638   69488 fix.go:54] fixHost starting: m02
	I1018 17:48:17.833888   69488 cli_runner.go:164] Run: docker container inspect ha-181800-m02 --format={{.State.Status}}
	I1018 17:48:17.853969   69488 fix.go:112] recreateIfNeeded on ha-181800-m02: state=Stopped err=<nil>
	W1018 17:48:17.853999   69488 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 17:48:17.859511   69488 out.go:252] * Restarting existing docker container for "ha-181800-m02" ...
	I1018 17:48:17.859599   69488 cli_runner.go:164] Run: docker start ha-181800-m02
	I1018 17:48:18.199583   69488 cli_runner.go:164] Run: docker container inspect ha-181800-m02 --format={{.State.Status}}
	I1018 17:48:18.226549   69488 kic.go:430] container "ha-181800-m02" state is running.
	I1018 17:48:18.226893   69488 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-181800-m02
	I1018 17:48:18.262995   69488 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/config.json ...
	I1018 17:48:18.263226   69488 machine.go:93] provisionDockerMachine start ...
	I1018 17:48:18.263282   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m02
	I1018 17:48:18.293143   69488 main.go:141] libmachine: Using SSH client type: native
	I1018 17:48:18.293452   69488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I1018 17:48:18.293466   69488 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 17:48:18.294119   69488 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1018 17:48:21.560416   69488 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-181800-m02
	
	I1018 17:48:21.560480   69488 ubuntu.go:182] provisioning hostname "ha-181800-m02"
	I1018 17:48:21.560583   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m02
	I1018 17:48:21.588400   69488 main.go:141] libmachine: Using SSH client type: native
	I1018 17:48:21.588705   69488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I1018 17:48:21.588717   69488 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-181800-m02 && echo "ha-181800-m02" | sudo tee /etc/hostname
	I1018 17:48:21.918738   69488 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-181800-m02
	
	I1018 17:48:21.918888   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m02
	I1018 17:48:21.950544   69488 main.go:141] libmachine: Using SSH client type: native
	I1018 17:48:21.950842   69488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I1018 17:48:21.950857   69488 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-181800-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-181800-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-181800-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 17:48:22.217685   69488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 17:48:22.217712   69488 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-2509/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-2509/.minikube}
	I1018 17:48:22.217727   69488 ubuntu.go:190] setting up certificates
	I1018 17:48:22.217741   69488 provision.go:84] configureAuth start
	I1018 17:48:22.217804   69488 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-181800-m02
	I1018 17:48:22.255770   69488 provision.go:143] copyHostCerts
	I1018 17:48:22.255810   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem
	I1018 17:48:22.255843   69488 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem, removing ...
	I1018 17:48:22.255850   69488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem
	I1018 17:48:22.255928   69488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem (1078 bytes)
	I1018 17:48:22.255999   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem
	I1018 17:48:22.256017   69488 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem, removing ...
	I1018 17:48:22.256021   69488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem
	I1018 17:48:22.256045   69488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem (1123 bytes)
	I1018 17:48:22.256080   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem
	I1018 17:48:22.256096   69488 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem, removing ...
	I1018 17:48:22.256100   69488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem
	I1018 17:48:22.256121   69488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem (1675 bytes)
	I1018 17:48:22.256204   69488 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem org=jenkins.ha-181800-m02 san=[127.0.0.1 192.168.49.3 ha-181800-m02 localhost minikube]
	I1018 17:48:22.398509   69488 provision.go:177] copyRemoteCerts
	I1018 17:48:22.398627   69488 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 17:48:22.398703   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m02
	I1018 17:48:22.417071   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m02/id_rsa Username:docker}
	I1018 17:48:22.539435   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1018 17:48:22.539497   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 17:48:22.590740   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1018 17:48:22.590799   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 17:48:22.640636   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1018 17:48:22.640749   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1018 17:48:22.682470   69488 provision.go:87] duration metric: took 464.715425ms to configureAuth
	I1018 17:48:22.682541   69488 ubuntu.go:206] setting minikube options for container-runtime
	I1018 17:48:22.682832   69488 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:48:22.682993   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m02
	I1018 17:48:22.710684   69488 main.go:141] libmachine: Using SSH client type: native
	I1018 17:48:22.710986   69488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I1018 17:48:22.711001   69488 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 17:49:53.355970   69488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 17:49:53.355994   69488 machine.go:96] duration metric: took 1m35.092758423s to provisionDockerMachine
	I1018 17:49:53.356005   69488 start.go:293] postStartSetup for "ha-181800-m02" (driver="docker")
	I1018 17:49:53.356016   69488 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 17:49:53.356073   69488 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 17:49:53.356118   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m02
	I1018 17:49:53.374240   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m02/id_rsa Username:docker}
	I1018 17:49:53.476619   69488 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 17:49:53.479822   69488 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 17:49:53.479849   69488 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 17:49:53.479860   69488 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/addons for local assets ...
	I1018 17:49:53.479932   69488 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/files for local assets ...
	I1018 17:49:53.480042   69488 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> 43202.pem in /etc/ssl/certs
	I1018 17:49:53.480053   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> /etc/ssl/certs/43202.pem
	I1018 17:49:53.480150   69488 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 17:49:53.487506   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /etc/ssl/certs/43202.pem (1708 bytes)
	I1018 17:49:53.503781   69488 start.go:296] duration metric: took 147.726679ms for postStartSetup
	I1018 17:49:53.503861   69488 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 17:49:53.503907   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m02
	I1018 17:49:53.521965   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m02/id_rsa Username:docker}
	I1018 17:49:53.622051   69488 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 17:49:53.627407   69488 fix.go:56] duration metric: took 1m35.793761422s for fixHost
	I1018 17:49:53.627431   69488 start.go:83] releasing machines lock for "ha-181800-m02", held for 1m35.793813517s
	I1018 17:49:53.627503   69488 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-181800-m02
	I1018 17:49:53.647527   69488 out.go:179] * Found network options:
	I1018 17:49:53.650482   69488 out.go:179]   - NO_PROXY=192.168.49.2
	W1018 17:49:53.653336   69488 proxy.go:120] fail to check proxy env: Error ip not in block
	W1018 17:49:53.653390   69488 proxy.go:120] fail to check proxy env: Error ip not in block
	I1018 17:49:53.653464   69488 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 17:49:53.653510   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m02
	I1018 17:49:53.653793   69488 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 17:49:53.653863   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m02
	I1018 17:49:53.671905   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m02/id_rsa Username:docker}
	I1018 17:49:53.683540   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m02/id_rsa Username:docker}
	I1018 17:49:53.861179   69488 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 17:49:53.865770   69488 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 17:49:53.865856   69488 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 17:49:53.873670   69488 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 17:49:53.873694   69488 start.go:495] detecting cgroup driver to use...
	I1018 17:49:53.873745   69488 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 17:49:53.873813   69488 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 17:49:53.888526   69488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 17:49:53.901761   69488 docker.go:218] disabling cri-docker service (if available) ...
	I1018 17:49:53.901850   69488 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 17:49:53.917699   69488 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 17:49:53.931789   69488 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 17:49:54.071500   69488 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 17:49:54.203057   69488 docker.go:234] disabling docker service ...
	I1018 17:49:54.203122   69488 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 17:49:54.218563   69488 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 17:49:54.232433   69488 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 17:49:54.361440   69488 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 17:49:54.490330   69488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 17:49:54.503221   69488 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 17:49:54.517805   69488 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 17:49:54.517883   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:49:54.527169   69488 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 17:49:54.527231   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:49:54.536041   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:49:54.544703   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:49:54.553243   69488 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 17:49:54.562614   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:49:54.571510   69488 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:49:54.579788   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:49:54.588456   69488 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 17:49:54.595820   69488 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 17:49:54.602817   69488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 17:49:54.728528   69488 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 17:49:58.621131   69488 ssh_runner.go:235] Completed: sudo systemctl restart crio: (3.89256859s)
	I1018 17:49:58.626115   69488 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 17:49:58.626223   69488 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 17:49:58.631167   69488 start.go:563] Will wait 60s for crictl version
	I1018 17:49:58.631232   69488 ssh_runner.go:195] Run: which crictl
	I1018 17:49:58.639191   69488 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 17:49:58.672795   69488 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 17:49:58.672878   69488 ssh_runner.go:195] Run: crio --version
	I1018 17:49:58.723386   69488 ssh_runner.go:195] Run: crio --version
	I1018 17:49:58.777499   69488 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 17:49:58.780571   69488 out.go:179]   - env NO_PROXY=192.168.49.2
	I1018 17:49:58.783632   69488 cli_runner.go:164] Run: docker network inspect ha-181800 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 17:49:58.815077   69488 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1018 17:49:58.819329   69488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 17:49:58.831215   69488 mustload.go:65] Loading cluster: ha-181800
	I1018 17:49:58.831449   69488 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:49:58.831716   69488 cli_runner.go:164] Run: docker container inspect ha-181800 --format={{.State.Status}}
	I1018 17:49:58.862708   69488 host.go:66] Checking if "ha-181800" exists ...
	I1018 17:49:58.863022   69488 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800 for IP: 192.168.49.3
	I1018 17:49:58.863040   69488 certs.go:195] generating shared ca certs ...
	I1018 17:49:58.863058   69488 certs.go:227] acquiring lock for ca certs: {Name:mk544ed642fa2832e9f6dd22fa45f3270b7c1ad4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:49:58.863172   69488 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key
	I1018 17:49:58.863215   69488 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key
	I1018 17:49:58.863222   69488 certs.go:257] generating profile certs ...
	I1018 17:49:58.863290   69488 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/client.key
	I1018 17:49:58.863337   69488 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.key.887e0b27
	I1018 17:49:58.863381   69488 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.key
	I1018 17:49:58.863390   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1018 17:49:58.863402   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1018 17:49:58.863414   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1018 17:49:58.863425   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1018 17:49:58.863435   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1018 17:49:58.863448   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1018 17:49:58.863470   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1018 17:49:58.863481   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1018 17:49:58.863531   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem (1338 bytes)
	W1018 17:49:58.863559   69488 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320_empty.pem, impossibly tiny 0 bytes
	I1018 17:49:58.863567   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 17:49:58.863589   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem (1078 bytes)
	I1018 17:49:58.863615   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem (1123 bytes)
	I1018 17:49:58.863635   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem (1675 bytes)
	I1018 17:49:58.863676   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem (1708 bytes)
	I1018 17:49:58.863709   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> /usr/share/ca-certificates/43202.pem
	I1018 17:49:58.863731   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:49:58.863743   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem -> /usr/share/ca-certificates/4320.pem
	I1018 17:49:58.863871   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:49:58.882935   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800/id_rsa Username:docker}
	I1018 17:49:58.981280   69488 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1018 17:49:58.984884   69488 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1018 17:49:58.992968   69488 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1018 17:49:58.996547   69488 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1018 17:49:59.005742   69488 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1018 17:49:59.009863   69488 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1018 17:49:59.018651   69488 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1018 17:49:59.022300   69488 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1018 17:49:59.030647   69488 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1018 17:49:59.034128   69488 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1018 17:49:59.042303   69488 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1018 17:49:59.045696   69488 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
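The "scp ... --> memory" lines above show the joining control plane reading the shared secrets off the primary node — service-account keypair, front-proxy CA, and etcd CA — before pushing them to the new member; in an HA cluster these files must be byte-identical on every control plane. A rough local illustration of the "collect into memory and report sizes" step (the file list is copied from the log; this reads local files rather than going over SSH as minikube does):

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    // Shared control-plane material every member must agree on (paths from the log above).
    var shared = []string{
    	"/var/lib/minikube/certs/sa.pub",
    	"/var/lib/minikube/certs/sa.key",
    	"/var/lib/minikube/certs/front-proxy-ca.crt",
    	"/var/lib/minikube/certs/front-proxy-ca.key",
    	"/var/lib/minikube/certs/etcd/ca.crt",
    	"/var/lib/minikube/certs/etcd/ca.key",
    }

    func main() {
    	inMemory := map[string][]byte{}
    	for _, p := range shared {
    		b, err := os.ReadFile(p)
    		if err != nil {
    			fmt.Fprintf(os.Stderr, "skip %s: %v\n", p, err)
    			continue
    		}
    		inMemory[filepath.Base(p)] = b
    		fmt.Printf("loaded %s (%d bytes)\n", p, len(b)) // mirrors the byte counts in the log
    	}
    }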
	I1018 17:49:59.054134   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 17:49:59.072336   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 17:49:59.090250   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 17:49:59.107793   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 17:49:59.124795   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1018 17:49:59.150615   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 17:49:59.169033   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 17:49:59.186177   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 17:49:59.203120   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /usr/share/ca-certificates/43202.pem (1708 bytes)
	I1018 17:49:59.220145   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 17:49:59.237999   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem --> /usr/share/ca-certificates/4320.pem (1338 bytes)
	I1018 17:49:59.257279   69488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1018 17:49:59.269634   69488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1018 17:49:59.282735   69488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1018 17:49:59.295341   69488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1018 17:49:59.308329   69488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1018 17:49:59.320556   69488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1018 17:49:59.332714   69488 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1018 17:49:59.348902   69488 ssh_runner.go:195] Run: openssl version
	I1018 17:49:59.356738   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 17:49:59.365172   69488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:49:59.368839   69488 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:49:59.368976   69488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:49:59.414784   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 17:49:59.422423   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4320.pem && ln -fs /usr/share/ca-certificates/4320.pem /etc/ssl/certs/4320.pem"
	I1018 17:49:59.430191   69488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4320.pem
	I1018 17:49:59.433619   69488 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 17:19 /usr/share/ca-certificates/4320.pem
	I1018 17:49:59.433727   69488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4320.pem
	I1018 17:49:59.474255   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4320.pem /etc/ssl/certs/51391683.0"
	I1018 17:49:59.481911   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43202.pem && ln -fs /usr/share/ca-certificates/43202.pem /etc/ssl/certs/43202.pem"
	I1018 17:49:59.490061   69488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43202.pem
	I1018 17:49:59.493763   69488 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 17:19 /usr/share/ca-certificates/43202.pem
	I1018 17:49:59.493835   69488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43202.pem
	I1018 17:49:59.534567   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43202.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 17:49:59.542475   69488 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 17:49:59.546230   69488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 17:49:59.592499   69488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 17:49:59.635764   69488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 17:49:59.676750   69488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 17:49:59.719668   69488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 17:49:59.760653   69488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
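The six openssl runs above are `openssl x509 -checkend 86400` against the static control-plane certificates, i.e. "will this cert still be valid 24 hours from now". An equivalent check written with Go's crypto/x509 (the path in main is illustrative; this is not minikube's implementation):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the first certificate in a PEM file will be
    // expired `window` from now — the same test as `openssl x509 -checkend`.
    func expiresWithin(path string, window time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block found", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	if soon {
    		fmt.Println("certificate expires within 24h — would trigger regeneration")
    	} else {
    		fmt.Println("certificate valid for at least another 24h")
    	}
    }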
	I1018 17:49:59.801453   69488 kubeadm.go:934] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1018 17:49:59.801594   69488 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-181800-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-181800 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
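The kubelet unit dump above is the systemd drop-in that gets written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down: ExecStart is cleared and re-set with per-node flags (--hostname-override=ha-181800-m02, --node-ip=192.168.49.3). A small text/template sketch that renders a drop-in of the same shape (the struct and field names are illustrative, not minikube's types, and a few flags from the log are omitted for brevity):

    package main

    import (
    	"os"
    	"text/template"
    )

    // Hypothetical per-node values; in the log they come from the cluster config.
    type kubeletNode struct {
    	KubernetesVersion string
    	Hostname          string
    	NodeIP            string
    }

    const dropIn = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
    	t := template.Must(template.New("kubelet").Parse(dropIn))
    	// Printed to stdout here; the real flow scp's the rendered unit to the node.
    	_ = t.Execute(os.Stdout, kubeletNode{
    		KubernetesVersion: "v1.34.1",
    		Hostname:          "ha-181800-m02",
    		NodeIP:            "192.168.49.3",
    	})
    }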
	I1018 17:49:59.801625   69488 kube-vip.go:115] generating kube-vip config ...
	I1018 17:49:59.801676   69488 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1018 17:49:59.813138   69488 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1018 17:49:59.813221   69488 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
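In the step above, `lsmod | grep ip_vs` exits non-zero, so control-plane load-balancing via IPVS is skipped and the generated kube-vip static pod falls back to ARP-advertising the VIP 192.168.49.254 (vip_arp=true in the manifest). A minimal Go version of that kernel-module probe, reading /proc/modules directly instead of shelling out to lsmod:

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    // moduleLoaded scans /proc/modules (the data lsmod prints) for the module
    // itself or any of its submodules (e.g. ip_vs_rr).
    func moduleLoaded(name string) (bool, error) {
    	f, err := os.Open("/proc/modules")
    	if err != nil {
    		return false, err
    	}
    	defer f.Close()
    	s := bufio.NewScanner(f)
    	for s.Scan() {
    		fields := strings.Fields(s.Text())
    		if len(fields) > 0 && (fields[0] == name || strings.HasPrefix(fields[0], name+"_")) {
    			return true, nil
    		}
    	}
    	return false, s.Err()
    }

    func main() {
    	ok, err := moduleLoaded("ip_vs")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	if ok {
    		fmt.Println("ip_vs available: IPVS load-balancing could be enabled")
    	} else {
    		fmt.Println("ip_vs missing: fall back to ARP-based VIP, as in the log above")
    	}
    }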
	I1018 17:49:59.813313   69488 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 17:49:59.820930   69488 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 17:49:59.821061   69488 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1018 17:49:59.828485   69488 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1018 17:49:59.840643   69488 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 17:49:59.853675   69488 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1018 17:49:59.867836   69488 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1018 17:49:59.871456   69488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 17:49:59.881052   69488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 17:50:00.019627   69488 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 17:50:00.063785   69488 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 17:50:00.065404   69488 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:50:00.068131   69488 out.go:179] * Verifying Kubernetes components...
	I1018 17:50:00.071263   69488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 17:50:00.372789   69488 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 17:50:00.393030   69488 kapi.go:59] client config for ha-181800: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/client.key", CAFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1018 17:50:00.393170   69488 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1018 17:50:00.393487   69488 node_ready.go:35] waiting up to 6m0s for node "ha-181800-m02" to be "Ready" ...
	W1018 17:50:02.394400   69488 node_ready.go:55] error getting node "ha-181800-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-181800-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1018 17:50:08.470080   69488 node_ready.go:57] node "ha-181800-m02" has "Ready":"Unknown" status (will retry)
	I1018 17:50:09.421305   69488 node_ready.go:49] node "ha-181800-m02" is "Ready"
	I1018 17:50:09.421384   69488 node_ready.go:38] duration metric: took 9.02787205s for node "ha-181800-m02" to be "Ready" ...
	I1018 17:50:09.421422   69488 api_server.go:52] waiting for apiserver process to appear ...
	I1018 17:50:09.421500   69488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:50:09.447456   69488 api_server.go:72] duration metric: took 9.383624261s to wait for apiserver process to appear ...
	I1018 17:50:09.447520   69488 api_server.go:88] waiting for apiserver healthz status ...
	I1018 17:50:09.447553   69488 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 17:50:09.466347   69488 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 17:50:09.466422   69488 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 17:50:09.947999   69488 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 17:50:09.958418   69488 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 17:50:09.958509   69488 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 17:50:10.447814   69488 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 17:50:10.462608   69488 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1018 17:50:10.463984   69488 api_server.go:141] control plane version: v1.34.1
	I1018 17:50:10.464041   69488 api_server.go:131] duration metric: took 1.016500993s to wait for apiserver health ...
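The wait above polls https://192.168.49.2:8443/healthz roughly every 500ms; the first two probes return 500 because the [-]poststarthook/rbac/bootstrap-roles hook has not finished, and the loop only proceeds once the endpoint answers 200/ok. A stripped-down version of that polling loop (the CA path is illustrative; a real client should always verify the API server certificate, as this sketch attempts to do):

    package main

    import (
    	"crypto/tls"
    	"crypto/x509"
    	"fmt"
    	"io"
    	"net/http"
    	"os"
    	"time"
    )

    // waitHealthz polls <base>/healthz until it returns 200 or the deadline passes.
    func waitHealthz(base, caFile string, timeout time.Duration) error {
    	pool := x509.NewCertPool()
    	if pemBytes, err := os.ReadFile(caFile); err == nil {
    		pool.AppendCertsFromPEM(pemBytes) // if the CA cannot be read, TLS verification will fail below
    	}
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(base + "/healthz")
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // "ok", as at 17:50:10 above
    			}
    			fmt.Printf("healthz %d: %s\n", resp.StatusCode, body) // 500 while post-start hooks run
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver not healthy after %s", timeout)
    }

    func main() {
    	if err := waitHealthz("https://192.168.49.2:8443", "/var/lib/minikube/certs/ca.crt", 2*time.Minute); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("apiserver healthy")
    }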
	I1018 17:50:10.464067   69488 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 17:50:10.483197   69488 system_pods.go:59] 26 kube-system pods found
	I1018 17:50:10.483289   69488 system_pods.go:61] "coredns-66bc5c9577-f6v2w" [a1fbdf00-9636-43a5-b1ed-a98bcacb5537] Running
	I1018 17:50:10.483312   69488 system_pods.go:61] "coredns-66bc5c9577-p7nbg" [9d361193-5b45-400e-8161-804fc30e7515] Running
	I1018 17:50:10.483343   69488 system_pods.go:61] "etcd-ha-181800" [3aafeb42-d09a-4b84-9739-e25adc3a4135] Running
	I1018 17:50:10.483363   69488 system_pods.go:61] "etcd-ha-181800-m02" [194d8d52-b9b6-43ae-8c1f-01b965d3ae96] Running
	I1018 17:50:10.483380   69488 system_pods.go:61] "etcd-ha-181800-m03" [f52cd0ee-6f99-49ba-8c4f-218b8d166fe2] Running
	I1018 17:50:10.483399   69488 system_pods.go:61] "kindnet-72mvm" [5edfc356-9d49-4895-b36a-06c2bd39155a] Running
	I1018 17:50:10.483417   69488 system_pods.go:61] "kindnet-86s8z" [6559ac9e-c73d-4d49-a0e1-87d630e5bec8] Running
	I1018 17:50:10.483439   69488 system_pods.go:61] "kindnet-88bv7" [3b3b9715-1e6e-4046-adae-f372381e068a] Running
	I1018 17:50:10.483466   69488 system_pods.go:61] "kindnet-9qbbw" [d1a305ed-4a0e-4ccc-90e0-04577ad4e5c4] Running
	I1018 17:50:10.483486   69488 system_pods.go:61] "kube-apiserver-ha-181800" [4966738e-d055-404d-82ad-0d3f23ef0337] Running
	I1018 17:50:10.483506   69488 system_pods.go:61] "kube-apiserver-ha-181800-m02" [344fc499-0c04-4f86-a919-3c2da1e7a1e6] Running
	I1018 17:50:10.483524   69488 system_pods.go:61] "kube-apiserver-ha-181800-m03" [ce72f944-adc2-46a9-a83c-dc75936c3e9c] Running
	I1018 17:50:10.483543   69488 system_pods.go:61] "kube-controller-manager-ha-181800" [9a4be61b-4ecc-46da-86a1-472b6da720b9] Running
	I1018 17:50:10.483573   69488 system_pods.go:61] "kube-controller-manager-ha-181800-m02" [6a519ce2-92dc-4003-8f1a-6d818fea6da3] Running
	I1018 17:50:10.483593   69488 system_pods.go:61] "kube-controller-manager-ha-181800-m03" [9d247c9d-37a0-4880-8b0a-1134ebb963ab] Running
	I1018 17:50:10.483612   69488 system_pods.go:61] "kube-proxy-dpwpn" [dfabd129-fc36-4d16-ab0f-0b9ecc015712] Running
	I1018 17:50:10.483630   69488 system_pods.go:61] "kube-proxy-fj4ww" [40c5681f-ad11-4e21-a852-5601e2a9fa6e] Running
	I1018 17:50:10.483648   69488 system_pods.go:61] "kube-proxy-qsqmb" [9e100b31-50e5-4d86-a234-0d6277009e98] Running
	I1018 17:50:10.483673   69488 system_pods.go:61] "kube-proxy-stgvm" [15b89226-91ae-478f-acfe-7841776b1377] Running
	I1018 17:50:10.483697   69488 system_pods.go:61] "kube-scheduler-ha-181800" [f4699386-754c-4fa2-8556-174d872d6825] Running
	I1018 17:50:10.483716   69488 system_pods.go:61] "kube-scheduler-ha-181800-m02" [565d55c5-9541-4ef9-a036-3d9ff03f0fa9] Running
	I1018 17:50:10.483733   69488 system_pods.go:61] "kube-scheduler-ha-181800-m03" [4f8687e4-3dbc-4c98-97a4-ab703b016798] Running
	I1018 17:50:10.483751   69488 system_pods.go:61] "kube-vip-ha-181800" [a947f5a9-6257-4ff0-9f73-2d720974668b] Running
	I1018 17:50:10.483784   69488 system_pods.go:61] "kube-vip-ha-181800-m02" [21258022-efed-42fb-b206-89ffcd8d3820] Running
	I1018 17:50:10.483812   69488 system_pods.go:61] "kube-vip-ha-181800-m03" [0087f776-5d07-4c43-906d-c63afc2cc349] Running
	I1018 17:50:10.483830   69488 system_pods.go:61] "storage-provisioner" [3c6521cd-8e1b-46aa-96a3-39e475e1426c] Running
	I1018 17:50:10.483848   69488 system_pods.go:74] duration metric: took 19.763103ms to wait for pod list to return data ...
	I1018 17:50:10.483877   69488 default_sa.go:34] waiting for default service account to be created ...
	I1018 17:50:10.493513   69488 default_sa.go:45] found service account: "default"
	I1018 17:50:10.493594   69488 default_sa.go:55] duration metric: took 9.697323ms for default service account to be created ...
	I1018 17:50:10.493625   69488 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 17:50:10.501353   69488 system_pods.go:86] 26 kube-system pods found
	I1018 17:50:10.501452   69488 system_pods.go:89] "coredns-66bc5c9577-f6v2w" [a1fbdf00-9636-43a5-b1ed-a98bcacb5537] Running
	I1018 17:50:10.501476   69488 system_pods.go:89] "coredns-66bc5c9577-p7nbg" [9d361193-5b45-400e-8161-804fc30e7515] Running
	I1018 17:50:10.501494   69488 system_pods.go:89] "etcd-ha-181800" [3aafeb42-d09a-4b84-9739-e25adc3a4135] Running
	I1018 17:50:10.501514   69488 system_pods.go:89] "etcd-ha-181800-m02" [194d8d52-b9b6-43ae-8c1f-01b965d3ae96] Running
	I1018 17:50:10.501540   69488 system_pods.go:89] "etcd-ha-181800-m03" [f52cd0ee-6f99-49ba-8c4f-218b8d166fe2] Running
	I1018 17:50:10.501560   69488 system_pods.go:89] "kindnet-72mvm" [5edfc356-9d49-4895-b36a-06c2bd39155a] Running
	I1018 17:50:10.501578   69488 system_pods.go:89] "kindnet-86s8z" [6559ac9e-c73d-4d49-a0e1-87d630e5bec8] Running
	I1018 17:50:10.501595   69488 system_pods.go:89] "kindnet-88bv7" [3b3b9715-1e6e-4046-adae-f372381e068a] Running
	I1018 17:50:10.501612   69488 system_pods.go:89] "kindnet-9qbbw" [d1a305ed-4a0e-4ccc-90e0-04577ad4e5c4] Running
	I1018 17:50:10.501639   69488 system_pods.go:89] "kube-apiserver-ha-181800" [4966738e-d055-404d-82ad-0d3f23ef0337] Running
	I1018 17:50:10.501660   69488 system_pods.go:89] "kube-apiserver-ha-181800-m02" [344fc499-0c04-4f86-a919-3c2da1e7a1e6] Running
	I1018 17:50:10.501677   69488 system_pods.go:89] "kube-apiserver-ha-181800-m03" [ce72f944-adc2-46a9-a83c-dc75936c3e9c] Running
	I1018 17:50:10.501694   69488 system_pods.go:89] "kube-controller-manager-ha-181800" [9a4be61b-4ecc-46da-86a1-472b6da720b9] Running
	I1018 17:50:10.501711   69488 system_pods.go:89] "kube-controller-manager-ha-181800-m02" [6a519ce2-92dc-4003-8f1a-6d818fea6da3] Running
	I1018 17:50:10.501737   69488 system_pods.go:89] "kube-controller-manager-ha-181800-m03" [9d247c9d-37a0-4880-8b0a-1134ebb963ab] Running
	I1018 17:50:10.501756   69488 system_pods.go:89] "kube-proxy-dpwpn" [dfabd129-fc36-4d16-ab0f-0b9ecc015712] Running
	I1018 17:50:10.501776   69488 system_pods.go:89] "kube-proxy-fj4ww" [40c5681f-ad11-4e21-a852-5601e2a9fa6e] Running
	I1018 17:50:10.501793   69488 system_pods.go:89] "kube-proxy-qsqmb" [9e100b31-50e5-4d86-a234-0d6277009e98] Running
	I1018 17:50:10.501809   69488 system_pods.go:89] "kube-proxy-stgvm" [15b89226-91ae-478f-acfe-7841776b1377] Running
	I1018 17:50:10.501836   69488 system_pods.go:89] "kube-scheduler-ha-181800" [f4699386-754c-4fa2-8556-174d872d6825] Running
	I1018 17:50:10.501855   69488 system_pods.go:89] "kube-scheduler-ha-181800-m02" [565d55c5-9541-4ef9-a036-3d9ff03f0fa9] Running
	I1018 17:50:10.501872   69488 system_pods.go:89] "kube-scheduler-ha-181800-m03" [4f8687e4-3dbc-4c98-97a4-ab703b016798] Running
	I1018 17:50:10.501889   69488 system_pods.go:89] "kube-vip-ha-181800" [a947f5a9-6257-4ff0-9f73-2d720974668b] Running
	I1018 17:50:10.501906   69488 system_pods.go:89] "kube-vip-ha-181800-m02" [21258022-efed-42fb-b206-89ffcd8d3820] Running
	I1018 17:50:10.501923   69488 system_pods.go:89] "kube-vip-ha-181800-m03" [0087f776-5d07-4c43-906d-c63afc2cc349] Running
	I1018 17:50:10.501939   69488 system_pods.go:89] "storage-provisioner" [3c6521cd-8e1b-46aa-96a3-39e475e1426c] Running
	I1018 17:50:10.501958   69488 system_pods.go:126] duration metric: took 8.313403ms to wait for k8s-apps to be running ...
	I1018 17:50:10.501982   69488 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 17:50:10.502072   69488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 17:50:10.521995   69488 system_svc.go:56] duration metric: took 20.005468ms WaitForService to wait for kubelet
	I1018 17:50:10.522064   69488 kubeadm.go:586] duration metric: took 10.458238282s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 17:50:10.522097   69488 node_conditions.go:102] verifying NodePressure condition ...
	I1018 17:50:10.529801   69488 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 17:50:10.529839   69488 node_conditions.go:123] node cpu capacity is 2
	I1018 17:50:10.529851   69488 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 17:50:10.529856   69488 node_conditions.go:123] node cpu capacity is 2
	I1018 17:50:10.529860   69488 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 17:50:10.529864   69488 node_conditions.go:123] node cpu capacity is 2
	I1018 17:50:10.529868   69488 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 17:50:10.529873   69488 node_conditions.go:123] node cpu capacity is 2
	I1018 17:50:10.529878   69488 node_conditions.go:105] duration metric: took 7.761413ms to run NodePressure ...
	I1018 17:50:10.529893   69488 start.go:241] waiting for startup goroutines ...
	I1018 17:50:10.529919   69488 start.go:255] writing updated cluster config ...
	I1018 17:50:10.533578   69488 out.go:203] 
	I1018 17:50:10.536806   69488 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:50:10.536948   69488 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/config.json ...
	I1018 17:50:10.540446   69488 out.go:179] * Starting "ha-181800-m03" control-plane node in "ha-181800" cluster
	I1018 17:50:10.544213   69488 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 17:50:10.547247   69488 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 17:50:10.550234   69488 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 17:50:10.550276   69488 cache.go:58] Caching tarball of preloaded images
	I1018 17:50:10.550383   69488 preload.go:233] Found /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 17:50:10.550399   69488 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 17:50:10.550572   69488 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/config.json ...
	I1018 17:50:10.550792   69488 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 17:50:10.581920   69488 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 17:50:10.581944   69488 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 17:50:10.581957   69488 cache.go:232] Successfully downloaded all kic artifacts
	I1018 17:50:10.581981   69488 start.go:360] acquireMachinesLock for ha-181800-m03: {Name:mk3bd15228a4ef4b7c016e23b190ad29deb5e3c7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 17:50:10.582039   69488 start.go:364] duration metric: took 38.023µs to acquireMachinesLock for "ha-181800-m03"
	I1018 17:50:10.582062   69488 start.go:96] Skipping create...Using existing machine configuration
	I1018 17:50:10.582068   69488 fix.go:54] fixHost starting: m03
	I1018 17:50:10.582331   69488 cli_runner.go:164] Run: docker container inspect ha-181800-m03 --format={{.State.Status}}
	I1018 17:50:10.604865   69488 fix.go:112] recreateIfNeeded on ha-181800-m03: state=Stopped err=<nil>
	W1018 17:50:10.604890   69488 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 17:50:10.607957   69488 out.go:252] * Restarting existing docker container for "ha-181800-m03" ...
	I1018 17:50:10.608050   69488 cli_runner.go:164] Run: docker start ha-181800-m03
	I1018 17:50:10.899418   69488 cli_runner.go:164] Run: docker container inspect ha-181800-m03 --format={{.State.Status}}
	I1018 17:50:10.926262   69488 kic.go:430] container "ha-181800-m03" state is running.
	I1018 17:50:10.926628   69488 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-181800-m03
	I1018 17:50:10.950821   69488 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/config.json ...
	I1018 17:50:10.951066   69488 machine.go:93] provisionDockerMachine start ...
	I1018 17:50:10.951120   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m03
	I1018 17:50:10.976987   69488 main.go:141] libmachine: Using SSH client type: native
	I1018 17:50:10.977281   69488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1018 17:50:10.977290   69488 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 17:50:10.978264   69488 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1018 17:50:14.380761   69488 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-181800-m03
	
	I1018 17:50:14.380788   69488 ubuntu.go:182] provisioning hostname "ha-181800-m03"
	I1018 17:50:14.380865   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m03
	I1018 17:50:14.409115   69488 main.go:141] libmachine: Using SSH client type: native
	I1018 17:50:14.409426   69488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1018 17:50:14.409441   69488 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-181800-m03 && echo "ha-181800-m03" | sudo tee /etc/hostname
	I1018 17:50:14.717264   69488 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-181800-m03
	
	I1018 17:50:14.717353   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m03
	I1018 17:50:14.739028   69488 main.go:141] libmachine: Using SSH client type: native
	I1018 17:50:14.739335   69488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1018 17:50:14.739352   69488 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-181800-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-181800-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-181800-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 17:50:14.965850   69488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 17:50:14.965903   69488 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-2509/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-2509/.minikube}
	I1018 17:50:14.965931   69488 ubuntu.go:190] setting up certificates
	I1018 17:50:14.965940   69488 provision.go:84] configureAuth start
	I1018 17:50:14.966014   69488 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-181800-m03
	I1018 17:50:15.001400   69488 provision.go:143] copyHostCerts
	I1018 17:50:15.001447   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem
	I1018 17:50:15.001479   69488 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem, removing ...
	I1018 17:50:15.001492   69488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem
	I1018 17:50:15.001591   69488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem (1078 bytes)
	I1018 17:50:15.001685   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem
	I1018 17:50:15.001709   69488 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem, removing ...
	I1018 17:50:15.001717   69488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem
	I1018 17:50:15.001745   69488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem (1123 bytes)
	I1018 17:50:15.001793   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem
	I1018 17:50:15.001814   69488 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem, removing ...
	I1018 17:50:15.001822   69488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem
	I1018 17:50:15.001846   69488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem (1675 bytes)
	I1018 17:50:15.001898   69488 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem org=jenkins.ha-181800-m03 san=[127.0.0.1 192.168.49.4 ha-181800-m03 localhost minikube]
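configureAuth above mints a Docker-machine style server certificate for the new node, signed by the local CA and carrying the SANs listed in the log (127.0.0.1, 192.168.49.4, ha-181800-m03, localhost, minikube). A compact crypto/x509 sketch of issuing a SAN-bearing server cert from a CA; it is self-contained, so it generates a throwaway CA instead of loading ca.pem/ca-key.pem the way the provisioner does:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"fmt"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Throwaway CA standing in for ~/.minikube/certs/ca.pem + ca-key.pem.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "example CA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().AddDate(10, 0, 0),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Server certificate carrying the SANs from the log line above.
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "ha-181800-m03", Organization: []string{"jenkins.ha-181800-m03"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(1, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"ha-181800-m03", "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.4")},
    	}
    	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	// The real flow writes this to machines/server.pem and scp's it to /etc/docker/server.pem.
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }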
	I1018 17:50:15.478787   69488 provision.go:177] copyRemoteCerts
	I1018 17:50:15.478855   69488 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 17:50:15.478897   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m03
	I1018 17:50:15.499352   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m03/id_rsa Username:docker}
	I1018 17:50:15.670546   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1018 17:50:15.670610   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 17:50:15.737652   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1018 17:50:15.737722   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1018 17:50:15.785672   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1018 17:50:15.785736   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 17:50:15.819920   69488 provision.go:87] duration metric: took 853.956632ms to configureAuth
	I1018 17:50:15.819958   69488 ubuntu.go:206] setting minikube options for container-runtime
	I1018 17:50:15.820214   69488 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:50:15.820332   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m03
	I1018 17:50:15.865677   69488 main.go:141] libmachine: Using SSH client type: native
	I1018 17:50:15.866025   69488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1018 17:50:15.866041   69488 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 17:50:16.412687   69488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 17:50:16.412751   69488 machine.go:96] duration metric: took 5.461676033s to provisionDockerMachine
	I1018 17:50:16.412774   69488 start.go:293] postStartSetup for "ha-181800-m03" (driver="docker")
	I1018 17:50:16.412799   69488 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 17:50:16.412889   69488 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 17:50:16.413002   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m03
	I1018 17:50:16.433582   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m03/id_rsa Username:docker}
	I1018 17:50:16.541794   69488 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 17:50:16.545653   69488 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 17:50:16.545679   69488 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 17:50:16.545690   69488 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/addons for local assets ...
	I1018 17:50:16.545754   69488 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/files for local assets ...
	I1018 17:50:16.545831   69488 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> 43202.pem in /etc/ssl/certs
	I1018 17:50:16.545837   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> /etc/ssl/certs/43202.pem
	I1018 17:50:16.545942   69488 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 17:50:16.558126   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /etc/ssl/certs/43202.pem (1708 bytes)
	I1018 17:50:16.579067   69488 start.go:296] duration metric: took 166.265226ms for postStartSetup
	I1018 17:50:16.579147   69488 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 17:50:16.579196   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m03
	I1018 17:50:16.607003   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m03/id_rsa Username:docker}
	I1018 17:50:16.710563   69488 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 17:50:16.715811   69488 fix.go:56] duration metric: took 6.133736189s for fixHost
	I1018 17:50:16.715839   69488 start.go:83] releasing machines lock for "ha-181800-m03", held for 6.133787135s
	I1018 17:50:16.715904   69488 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-181800-m03
	I1018 17:50:16.738713   69488 out.go:179] * Found network options:
	I1018 17:50:16.742042   69488 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1018 17:50:16.745211   69488 proxy.go:120] fail to check proxy env: Error ip not in block
	W1018 17:50:16.745257   69488 proxy.go:120] fail to check proxy env: Error ip not in block
	W1018 17:50:16.745281   69488 proxy.go:120] fail to check proxy env: Error ip not in block
	W1018 17:50:16.745291   69488 proxy.go:120] fail to check proxy env: Error ip not in block
	I1018 17:50:16.745360   69488 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 17:50:16.745415   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m03
	I1018 17:50:16.745719   69488 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 17:50:16.745787   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m03
	I1018 17:50:16.786710   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m03/id_rsa Username:docker}
	I1018 17:50:16.789091   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m03/id_rsa Username:docker}
	I1018 17:50:17.000059   69488 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 17:50:17.007334   69488 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 17:50:17.007407   69488 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 17:50:17.020749   69488 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 17:50:17.020771   69488 start.go:495] detecting cgroup driver to use...
	I1018 17:50:17.020801   69488 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 17:50:17.020860   69488 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 17:50:17.040018   69488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 17:50:17.058499   69488 docker.go:218] disabling cri-docker service (if available) ...
	I1018 17:50:17.058565   69488 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 17:50:17.088757   69488 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 17:50:17.114857   69488 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 17:50:17.279680   69488 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 17:50:17.689048   69488 docker.go:234] disabling docker service ...
	I1018 17:50:17.689168   69488 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 17:50:17.768854   69488 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 17:50:17.797881   69488 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 17:50:18.156314   69488 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 17:50:18.369568   69488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
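Because the cluster runtime is CRI-O, the steps above stop, disable, and mask cri-docker and docker so neither can reclaim the CRI socket after a reboot, then confirm with `systemctl is-active`. A rough os/exec sketch of that stop/disable/mask sequence (illustrative only: it must run as root on a systemd host, and failures on absent or already-stopped units are deliberately tolerated, as they are in the log):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Units to take out of the way before CRI-O owns the node (from the log above).
    	steps := [][]string{
    		{"systemctl", "stop", "-f", "cri-docker.socket"},
    		{"systemctl", "stop", "-f", "cri-docker.service"},
    		{"systemctl", "disable", "cri-docker.socket"},
    		{"systemctl", "mask", "cri-docker.service"},
    		{"systemctl", "stop", "-f", "docker.socket"},
    		{"systemctl", "stop", "-f", "docker.service"},
    		{"systemctl", "disable", "docker.socket"},
    		{"systemctl", "mask", "docker.service"},
    	}
    	for _, s := range steps {
    		// Errors are reported but not fatal: a unit that is missing or already stopped is fine.
    		if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
    			fmt.Printf("%v: %v (%s)\n", s, err, out)
    		}
    	}
    	// Final check, mirroring the is-active probe in the log.
    	if err := exec.Command("systemctl", "is-active", "--quiet", "docker").Run(); err != nil {
    		fmt.Println("docker is not active — CRI-O can own the CRI socket")
    	}
    }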
	I1018 17:50:18.394137   69488 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 17:50:18.428969   69488 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 17:50:18.429103   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:50:18.447576   69488 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 17:50:18.447692   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:50:18.482845   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:50:18.510376   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:50:18.531315   69488 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 17:50:18.548495   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:50:18.563525   69488 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:50:18.581424   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:50:18.594509   69488 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 17:50:18.609129   69488 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 17:50:18.621435   69488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 17:50:18.879315   69488 ssh_runner.go:195] Run: sudo systemctl restart crio
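The sed chain above edits /etc/crio/crio.conf.d/02-crio.conf in place: pin pause_image to registry.k8s.io/pause:3.10.1, set cgroup_manager to cgroupfs, add conmon_cgroup = "pod", and seed default_sysctls with net.ipv4.ip_unprivileged_port_start=0, before daemon-reload and a CRI-O restart. A minimal Go stand-in for one of those edits, the pause_image rewrite (it mirrors the sed command's behavior and nothing more; paths are the ones from the log):

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    // setPauseImage rewrites any existing pause_image line in a CRI-O drop-in,
    // the same effect as `sed -i 's|^.*pause_image = .*$|pause_image = "..."|'` above.
    func setPauseImage(path, image string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
    	updated := re.ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", image)))
    	return os.WriteFile(path, updated, 0644)
    }

    func main() {
    	if err := setPauseImage("/etc/crio/crio.conf.d/02-crio.conf", "registry.k8s.io/pause:3.10.1"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("pause_image updated; follow with `systemctl daemon-reload && systemctl restart crio`")
    }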
	I1018 17:50:19.151219   69488 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 17:50:19.151291   69488 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 17:50:19.155163   69488 start.go:563] Will wait 60s for crictl version
	I1018 17:50:19.155231   69488 ssh_runner.go:195] Run: which crictl
	I1018 17:50:19.159144   69488 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 17:50:19.185150   69488 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 17:50:19.185237   69488 ssh_runner.go:195] Run: crio --version
	I1018 17:50:19.215107   69488 ssh_runner.go:195] Run: crio --version
	I1018 17:50:19.252641   69488 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 17:50:19.255663   69488 out.go:179]   - env NO_PROXY=192.168.49.2
	I1018 17:50:19.258473   69488 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1018 17:50:19.261365   69488 cli_runner.go:164] Run: docker network inspect ha-181800 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 17:50:19.278013   69488 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1018 17:50:19.282046   69488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 17:50:19.291553   69488 mustload.go:65] Loading cluster: ha-181800
	I1018 17:50:19.291792   69488 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:50:19.292044   69488 cli_runner.go:164] Run: docker container inspect ha-181800 --format={{.State.Status}}
	I1018 17:50:19.308345   69488 host.go:66] Checking if "ha-181800" exists ...
	I1018 17:50:19.308613   69488 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800 for IP: 192.168.49.4
	I1018 17:50:19.308629   69488 certs.go:195] generating shared ca certs ...
	I1018 17:50:19.308644   69488 certs.go:227] acquiring lock for ca certs: {Name:mk544ed642fa2832e9f6dd22fa45f3270b7c1ad4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:50:19.308750   69488 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key
	I1018 17:50:19.308801   69488 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key
	I1018 17:50:19.308811   69488 certs.go:257] generating profile certs ...
	I1018 17:50:19.308888   69488 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/client.key
	I1018 17:50:19.308994   69488 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.key.35e78fdb
	I1018 17:50:19.309039   69488 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.key
	I1018 17:50:19.309051   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1018 17:50:19.309064   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1018 17:50:19.309079   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1018 17:50:19.309093   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1018 17:50:19.309106   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1018 17:50:19.309121   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1018 17:50:19.309132   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1018 17:50:19.309147   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1018 17:50:19.309202   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem (1338 bytes)
	W1018 17:50:19.309233   69488 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320_empty.pem, impossibly tiny 0 bytes
	I1018 17:50:19.309246   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 17:50:19.309272   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem (1078 bytes)
	I1018 17:50:19.309298   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem (1123 bytes)
	I1018 17:50:19.309353   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem (1675 bytes)
	I1018 17:50:19.309405   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem (1708 bytes)
	I1018 17:50:19.309436   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> /usr/share/ca-certificates/43202.pem
	I1018 17:50:19.309452   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:50:19.309465   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem -> /usr/share/ca-certificates/4320.pem
	I1018 17:50:19.309518   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:50:19.326970   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800/id_rsa Username:docker}
	I1018 17:50:19.425285   69488 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1018 17:50:19.430205   69488 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1018 17:50:19.438544   69488 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1018 17:50:19.442194   69488 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1018 17:50:19.450335   69488 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1018 17:50:19.454272   69488 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1018 17:50:19.462534   69488 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1018 17:50:19.466318   69488 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1018 17:50:19.475475   69488 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1018 17:50:19.479138   69488 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1018 17:50:19.487039   69488 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1018 17:50:19.492406   69488 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1018 17:50:19.511212   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 17:50:19.558261   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 17:50:19.590631   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 17:50:19.618816   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 17:50:19.644073   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1018 17:50:19.666879   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 17:50:19.688513   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 17:50:19.707989   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 17:50:19.736170   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /usr/share/ca-certificates/43202.pem (1708 bytes)
	I1018 17:50:19.759883   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 17:50:19.781940   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem --> /usr/share/ca-certificates/4320.pem (1338 bytes)
	I1018 17:50:19.806805   69488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1018 17:50:19.820301   69488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1018 17:50:19.837237   69488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1018 17:50:19.852161   69488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1018 17:50:19.865774   69488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1018 17:50:19.879759   69488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1018 17:50:19.893543   69488 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1018 17:50:19.907773   69488 ssh_runner.go:195] Run: openssl version
	I1018 17:50:19.914031   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4320.pem && ln -fs /usr/share/ca-certificates/4320.pem /etc/ssl/certs/4320.pem"
	I1018 17:50:19.923464   69488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4320.pem
	I1018 17:50:19.928100   69488 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 17:19 /usr/share/ca-certificates/4320.pem
	I1018 17:50:19.928198   69488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4320.pem
	I1018 17:50:19.970114   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4320.pem /etc/ssl/certs/51391683.0"
	I1018 17:50:19.978890   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43202.pem && ln -fs /usr/share/ca-certificates/43202.pem /etc/ssl/certs/43202.pem"
	I1018 17:50:19.987235   69488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43202.pem
	I1018 17:50:19.991041   69488 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 17:19 /usr/share/ca-certificates/43202.pem
	I1018 17:50:19.991160   69488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43202.pem
	I1018 17:50:20.033052   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43202.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 17:50:20.042399   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 17:50:20.051218   69488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:50:20.055291   69488 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:50:20.055383   69488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:50:20.097864   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 17:50:20.106870   69488 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 17:50:20.111573   69488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 17:50:20.153811   69488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 17:50:20.195276   69488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 17:50:20.242865   69488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 17:50:20.284917   69488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 17:50:20.327528   69488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1018 17:50:20.380629   69488 kubeadm.go:934] updating node {m03 192.168.49.4 8443 v1.34.1 crio true true} ...
	I1018 17:50:20.380764   69488 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-181800-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-181800 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 17:50:20.380810   69488 kube-vip.go:115] generating kube-vip config ...
	I1018 17:50:20.380884   69488 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1018 17:50:20.394557   69488 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1018 17:50:20.394614   69488 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1018 17:50:20.394671   69488 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 17:50:20.404177   69488 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 17:50:20.404302   69488 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1018 17:50:20.412251   69488 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1018 17:50:20.425311   69488 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 17:50:20.441214   69488 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1018 17:50:20.463677   69488 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1018 17:50:20.468015   69488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 17:50:20.478500   69488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 17:50:20.642164   69488 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 17:50:20.673908   69488 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 17:50:20.674213   69488 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:50:20.679253   69488 out.go:179] * Verifying Kubernetes components...
	I1018 17:50:20.682245   69488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 17:50:20.839086   69488 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 17:50:20.854027   69488 kapi.go:59] client config for ha-181800: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/client.key", CAFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1018 17:50:20.854101   69488 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1018 17:50:20.854335   69488 node_ready.go:35] waiting up to 6m0s for node "ha-181800-m03" to be "Ready" ...
	W1018 17:50:22.857724   69488 node_ready.go:57] node "ha-181800-m03" has "Ready":"Unknown" status (will retry)
	W1018 17:50:24.858447   69488 node_ready.go:57] node "ha-181800-m03" has "Ready":"Unknown" status (will retry)
	W1018 17:50:26.858609   69488 node_ready.go:57] node "ha-181800-m03" has "Ready":"Unknown" status (will retry)
	W1018 17:50:29.359403   69488 node_ready.go:57] node "ha-181800-m03" has "Ready":"Unknown" status (will retry)
	W1018 17:50:31.859188   69488 node_ready.go:57] node "ha-181800-m03" has "Ready":"Unknown" status (will retry)
	W1018 17:50:34.358228   69488 node_ready.go:57] node "ha-181800-m03" has "Ready":"Unknown" status (will retry)
	I1018 17:50:34.857876   69488 node_ready.go:49] node "ha-181800-m03" is "Ready"
	I1018 17:50:34.857902   69488 node_ready.go:38] duration metric: took 14.003549338s for node "ha-181800-m03" to be "Ready" ...
	I1018 17:50:34.857914   69488 api_server.go:52] waiting for apiserver process to appear ...
	I1018 17:50:34.857973   69488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:50:34.869120   69488 api_server.go:72] duration metric: took 14.194796326s to wait for apiserver process to appear ...
	I1018 17:50:34.869149   69488 api_server.go:88] waiting for apiserver healthz status ...
	I1018 17:50:34.869170   69488 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 17:50:34.878933   69488 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1018 17:50:34.879871   69488 api_server.go:141] control plane version: v1.34.1
	I1018 17:50:34.879896   69488 api_server.go:131] duration metric: took 10.739864ms to wait for apiserver health ...
	I1018 17:50:34.879915   69488 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 17:50:34.886492   69488 system_pods.go:59] 26 kube-system pods found
	I1018 17:50:34.886536   69488 system_pods.go:61] "coredns-66bc5c9577-f6v2w" [a1fbdf00-9636-43a5-b1ed-a98bcacb5537] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 17:50:34.886578   69488 system_pods.go:61] "coredns-66bc5c9577-p7nbg" [9d361193-5b45-400e-8161-804fc30e7515] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 17:50:34.886593   69488 system_pods.go:61] "etcd-ha-181800" [3aafeb42-d09a-4b84-9739-e25adc3a4135] Running
	I1018 17:50:34.886598   69488 system_pods.go:61] "etcd-ha-181800-m02" [194d8d52-b9b6-43ae-8c1f-01b965d3ae96] Running
	I1018 17:50:34.886603   69488 system_pods.go:61] "etcd-ha-181800-m03" [f52cd0ee-6f99-49ba-8c4f-218b8d166fe2] Running
	I1018 17:50:34.886607   69488 system_pods.go:61] "kindnet-72mvm" [5edfc356-9d49-4895-b36a-06c2bd39155a] Running
	I1018 17:50:34.886622   69488 system_pods.go:61] "kindnet-86s8z" [6559ac9e-c73d-4d49-a0e1-87d630e5bec8] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1018 17:50:34.886629   69488 system_pods.go:61] "kindnet-88bv7" [3b3b9715-1e6e-4046-adae-f372381e068a] Running
	I1018 17:50:34.886642   69488 system_pods.go:61] "kindnet-9qbbw" [d1a305ed-4a0e-4ccc-90e0-04577ad4e5c4] Running
	I1018 17:50:34.886646   69488 system_pods.go:61] "kube-apiserver-ha-181800" [4966738e-d055-404d-82ad-0d3f23ef0337] Running
	I1018 17:50:34.886650   69488 system_pods.go:61] "kube-apiserver-ha-181800-m02" [344fc499-0c04-4f86-a919-3c2da1e7a1e6] Running
	I1018 17:50:34.886654   69488 system_pods.go:61] "kube-apiserver-ha-181800-m03" [ce72f944-adc2-46a9-a83c-dc75936c3e9c] Running
	I1018 17:50:34.886659   69488 system_pods.go:61] "kube-controller-manager-ha-181800" [9a4be61b-4ecc-46da-86a1-472b6da720b9] Running
	I1018 17:50:34.886672   69488 system_pods.go:61] "kube-controller-manager-ha-181800-m02" [6a519ce2-92dc-4003-8f1a-6d818fea6da3] Running
	I1018 17:50:34.886679   69488 system_pods.go:61] "kube-controller-manager-ha-181800-m03" [9d247c9d-37a0-4880-8b0a-1134ebb963ab] Running
	I1018 17:50:34.886685   69488 system_pods.go:61] "kube-proxy-dpwpn" [dfabd129-fc36-4d16-ab0f-0b9ecc015712] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1018 17:50:34.886699   69488 system_pods.go:61] "kube-proxy-fj4ww" [40c5681f-ad11-4e21-a852-5601e2a9fa6e] Running
	I1018 17:50:34.886703   69488 system_pods.go:61] "kube-proxy-qsqmb" [9e100b31-50e5-4d86-a234-0d6277009e98] Running
	I1018 17:50:34.886707   69488 system_pods.go:61] "kube-proxy-stgvm" [15b89226-91ae-478f-acfe-7841776b1377] Running
	I1018 17:50:34.886714   69488 system_pods.go:61] "kube-scheduler-ha-181800" [f4699386-754c-4fa2-8556-174d872d6825] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 17:50:34.886723   69488 system_pods.go:61] "kube-scheduler-ha-181800-m02" [565d55c5-9541-4ef9-a036-3d9ff03f0fa9] Running
	I1018 17:50:34.886727   69488 system_pods.go:61] "kube-scheduler-ha-181800-m03" [4f8687e4-3dbc-4c98-97a4-ab703b016798] Running
	I1018 17:50:34.886732   69488 system_pods.go:61] "kube-vip-ha-181800" [a947f5a9-6257-4ff0-9f73-2d720974668b] Running
	I1018 17:50:34.886739   69488 system_pods.go:61] "kube-vip-ha-181800-m02" [21258022-efed-42fb-b206-89ffcd8d3820] Running
	I1018 17:50:34.886743   69488 system_pods.go:61] "kube-vip-ha-181800-m03" [0087f776-5d07-4c43-906d-c63afc2cc349] Running
	I1018 17:50:34.886747   69488 system_pods.go:61] "storage-provisioner" [3c6521cd-8e1b-46aa-96a3-39e475e1426c] Running
	I1018 17:50:34.886753   69488 system_pods.go:74] duration metric: took 6.831276ms to wait for pod list to return data ...
	I1018 17:50:34.886767   69488 default_sa.go:34] waiting for default service account to be created ...
	I1018 17:50:34.890059   69488 default_sa.go:45] found service account: "default"
	I1018 17:50:34.890090   69488 default_sa.go:55] duration metric: took 3.316408ms for default service account to be created ...
	I1018 17:50:34.890099   69488 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 17:50:34.899064   69488 system_pods.go:86] 26 kube-system pods found
	I1018 17:50:34.899114   69488 system_pods.go:89] "coredns-66bc5c9577-f6v2w" [a1fbdf00-9636-43a5-b1ed-a98bcacb5537] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 17:50:34.899126   69488 system_pods.go:89] "coredns-66bc5c9577-p7nbg" [9d361193-5b45-400e-8161-804fc30e7515] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 17:50:34.899135   69488 system_pods.go:89] "etcd-ha-181800" [3aafeb42-d09a-4b84-9739-e25adc3a4135] Running
	I1018 17:50:34.899145   69488 system_pods.go:89] "etcd-ha-181800-m02" [194d8d52-b9b6-43ae-8c1f-01b965d3ae96] Running
	I1018 17:50:34.899154   69488 system_pods.go:89] "etcd-ha-181800-m03" [f52cd0ee-6f99-49ba-8c4f-218b8d166fe2] Running
	I1018 17:50:34.899159   69488 system_pods.go:89] "kindnet-72mvm" [5edfc356-9d49-4895-b36a-06c2bd39155a] Running
	I1018 17:50:34.899172   69488 system_pods.go:89] "kindnet-86s8z" [6559ac9e-c73d-4d49-a0e1-87d630e5bec8] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1018 17:50:34.899182   69488 system_pods.go:89] "kindnet-88bv7" [3b3b9715-1e6e-4046-adae-f372381e068a] Running
	I1018 17:50:34.899196   69488 system_pods.go:89] "kindnet-9qbbw" [d1a305ed-4a0e-4ccc-90e0-04577ad4e5c4] Running
	I1018 17:50:34.899202   69488 system_pods.go:89] "kube-apiserver-ha-181800" [4966738e-d055-404d-82ad-0d3f23ef0337] Running
	I1018 17:50:34.899213   69488 system_pods.go:89] "kube-apiserver-ha-181800-m02" [344fc499-0c04-4f86-a919-3c2da1e7a1e6] Running
	I1018 17:50:34.899223   69488 system_pods.go:89] "kube-apiserver-ha-181800-m03" [ce72f944-adc2-46a9-a83c-dc75936c3e9c] Running
	I1018 17:50:34.899228   69488 system_pods.go:89] "kube-controller-manager-ha-181800" [9a4be61b-4ecc-46da-86a1-472b6da720b9] Running
	I1018 17:50:34.899243   69488 system_pods.go:89] "kube-controller-manager-ha-181800-m02" [6a519ce2-92dc-4003-8f1a-6d818fea6da3] Running
	I1018 17:50:34.899249   69488 system_pods.go:89] "kube-controller-manager-ha-181800-m03" [9d247c9d-37a0-4880-8b0a-1134ebb963ab] Running
	I1018 17:50:34.899260   69488 system_pods.go:89] "kube-proxy-dpwpn" [dfabd129-fc36-4d16-ab0f-0b9ecc015712] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1018 17:50:34.899271   69488 system_pods.go:89] "kube-proxy-fj4ww" [40c5681f-ad11-4e21-a852-5601e2a9fa6e] Running
	I1018 17:50:34.899276   69488 system_pods.go:89] "kube-proxy-qsqmb" [9e100b31-50e5-4d86-a234-0d6277009e98] Running
	I1018 17:50:34.899281   69488 system_pods.go:89] "kube-proxy-stgvm" [15b89226-91ae-478f-acfe-7841776b1377] Running
	I1018 17:50:34.899294   69488 system_pods.go:89] "kube-scheduler-ha-181800" [f4699386-754c-4fa2-8556-174d872d6825] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 17:50:34.899303   69488 system_pods.go:89] "kube-scheduler-ha-181800-m02" [565d55c5-9541-4ef9-a036-3d9ff03f0fa9] Running
	I1018 17:50:34.899308   69488 system_pods.go:89] "kube-scheduler-ha-181800-m03" [4f8687e4-3dbc-4c98-97a4-ab703b016798] Running
	I1018 17:50:34.899312   69488 system_pods.go:89] "kube-vip-ha-181800" [a947f5a9-6257-4ff0-9f73-2d720974668b] Running
	I1018 17:50:34.899323   69488 system_pods.go:89] "kube-vip-ha-181800-m02" [21258022-efed-42fb-b206-89ffcd8d3820] Running
	I1018 17:50:34.899327   69488 system_pods.go:89] "kube-vip-ha-181800-m03" [0087f776-5d07-4c43-906d-c63afc2cc349] Running
	I1018 17:50:34.899331   69488 system_pods.go:89] "storage-provisioner" [3c6521cd-8e1b-46aa-96a3-39e475e1426c] Running
	I1018 17:50:34.899338   69488 system_pods.go:126] duration metric: took 9.233497ms to wait for k8s-apps to be running ...
	I1018 17:50:34.899350   69488 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 17:50:34.899417   69488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 17:50:34.917250   69488 system_svc.go:56] duration metric: took 17.889347ms WaitForService to wait for kubelet
	I1018 17:50:34.917280   69488 kubeadm.go:586] duration metric: took 14.242961018s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 17:50:34.917312   69488 node_conditions.go:102] verifying NodePressure condition ...
	I1018 17:50:34.921584   69488 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 17:50:34.921618   69488 node_conditions.go:123] node cpu capacity is 2
	I1018 17:50:34.921629   69488 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 17:50:34.921635   69488 node_conditions.go:123] node cpu capacity is 2
	I1018 17:50:34.921640   69488 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 17:50:34.921644   69488 node_conditions.go:123] node cpu capacity is 2
	I1018 17:50:34.921648   69488 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 17:50:34.921652   69488 node_conditions.go:123] node cpu capacity is 2
	I1018 17:50:34.921657   69488 node_conditions.go:105] duration metric: took 4.33997ms to run NodePressure ...
	I1018 17:50:34.921672   69488 start.go:241] waiting for startup goroutines ...
	I1018 17:50:34.921695   69488 start.go:255] writing updated cluster config ...
	I1018 17:50:34.925146   69488 out.go:203] 
	I1018 17:50:34.928178   69488 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:50:34.928377   69488 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/config.json ...
	I1018 17:50:34.931719   69488 out.go:179] * Starting "ha-181800-m04" worker node in "ha-181800" cluster
	I1018 17:50:34.934625   69488 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 17:50:34.937723   69488 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 17:50:34.940621   69488 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 17:50:34.940656   69488 cache.go:58] Caching tarball of preloaded images
	I1018 17:50:34.940709   69488 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 17:50:34.940775   69488 preload.go:233] Found /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 17:50:34.940787   69488 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 17:50:34.940923   69488 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/config.json ...
	I1018 17:50:34.962521   69488 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 17:50:34.962544   69488 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 17:50:34.962563   69488 cache.go:232] Successfully downloaded all kic artifacts
	I1018 17:50:34.962587   69488 start.go:360] acquireMachinesLock for ha-181800-m04: {Name:mkde4f18de8924439f6b0cc4435fbaf784c3faa2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 17:50:34.962654   69488 start.go:364] duration metric: took 47.016µs to acquireMachinesLock for "ha-181800-m04"
	I1018 17:50:34.962676   69488 start.go:96] Skipping create...Using existing machine configuration
	I1018 17:50:34.962691   69488 fix.go:54] fixHost starting: m04
	I1018 17:50:34.962948   69488 cli_runner.go:164] Run: docker container inspect ha-181800-m04 --format={{.State.Status}}
	I1018 17:50:34.980810   69488 fix.go:112] recreateIfNeeded on ha-181800-m04: state=Stopped err=<nil>
	W1018 17:50:34.980838   69488 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 17:50:34.984164   69488 out.go:252] * Restarting existing docker container for "ha-181800-m04" ...
	I1018 17:50:34.984251   69488 cli_runner.go:164] Run: docker start ha-181800-m04
	I1018 17:50:35.315737   69488 cli_runner.go:164] Run: docker container inspect ha-181800-m04 --format={{.State.Status}}
	I1018 17:50:35.337160   69488 kic.go:430] container "ha-181800-m04" state is running.
	I1018 17:50:35.337590   69488 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-181800-m04
	I1018 17:50:35.363433   69488 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/config.json ...
	I1018 17:50:35.363682   69488 machine.go:93] provisionDockerMachine start ...
	I1018 17:50:35.363737   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m04
	I1018 17:50:35.394986   69488 main.go:141] libmachine: Using SSH client type: native
	I1018 17:50:35.395304   69488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1018 17:50:35.395315   69488 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 17:50:35.396115   69488 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1018 17:50:38.582281   69488 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-181800-m04
	
	I1018 17:50:38.582366   69488 ubuntu.go:182] provisioning hostname "ha-181800-m04"
	I1018 17:50:38.582470   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m04
	I1018 17:50:38.612842   69488 main.go:141] libmachine: Using SSH client type: native
	I1018 17:50:38.613162   69488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1018 17:50:38.613175   69488 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-181800-m04 && echo "ha-181800-m04" | sudo tee /etc/hostname
	I1018 17:50:38.824220   69488 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-181800-m04
	
	I1018 17:50:38.824341   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m04
	I1018 17:50:38.867678   69488 main.go:141] libmachine: Using SSH client type: native
	I1018 17:50:38.867969   69488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1018 17:50:38.867985   69488 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-181800-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-181800-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-181800-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 17:50:39.054604   69488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 17:50:39.054689   69488 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-2509/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-2509/.minikube}
	I1018 17:50:39.054718   69488 ubuntu.go:190] setting up certificates
	I1018 17:50:39.054753   69488 provision.go:84] configureAuth start
	I1018 17:50:39.054834   69488 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-181800-m04
	I1018 17:50:39.086058   69488 provision.go:143] copyHostCerts
	I1018 17:50:39.086092   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem
	I1018 17:50:39.086123   69488 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem, removing ...
	I1018 17:50:39.086130   69488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem
	I1018 17:50:39.086205   69488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem (1078 bytes)
	I1018 17:50:39.086277   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem
	I1018 17:50:39.086294   69488 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem, removing ...
	I1018 17:50:39.086298   69488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem
	I1018 17:50:39.086323   69488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem (1123 bytes)
	I1018 17:50:39.086360   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem
	I1018 17:50:39.086376   69488 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem, removing ...
	I1018 17:50:39.086380   69488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem
	I1018 17:50:39.086403   69488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem (1675 bytes)
	I1018 17:50:39.086448   69488 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem org=jenkins.ha-181800-m04 san=[127.0.0.1 192.168.49.5 ha-181800-m04 localhost minikube]
	I1018 17:50:39.468879   69488 provision.go:177] copyRemoteCerts
	I1018 17:50:39.469042   69488 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 17:50:39.469105   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m04
	I1018 17:50:39.488386   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m04/id_rsa Username:docker}
	I1018 17:50:39.624142   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1018 17:50:39.624201   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 17:50:39.661469   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1018 17:50:39.661533   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1018 17:50:39.687551   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1018 17:50:39.687610   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 17:50:39.714808   69488 provision.go:87] duration metric: took 660.019137ms to configureAuth
	I1018 17:50:39.714833   69488 ubuntu.go:206] setting minikube options for container-runtime
	I1018 17:50:39.715059   69488 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:50:39.715179   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m04
	I1018 17:50:39.744352   69488 main.go:141] libmachine: Using SSH client type: native
	I1018 17:50:39.744665   69488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1018 17:50:39.744680   69488 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 17:50:40.169343   69488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 17:50:40.169451   69488 machine.go:96] duration metric: took 4.805759657s to provisionDockerMachine
	I1018 17:50:40.169476   69488 start.go:293] postStartSetup for "ha-181800-m04" (driver="docker")
	I1018 17:50:40.169509   69488 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 17:50:40.169593   69488 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 17:50:40.169660   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m04
	I1018 17:50:40.199327   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m04/id_rsa Username:docker}
	I1018 17:50:40.309268   69488 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 17:50:40.313860   69488 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 17:50:40.313893   69488 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 17:50:40.313903   69488 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/addons for local assets ...
	I1018 17:50:40.313963   69488 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/files for local assets ...
	I1018 17:50:40.314046   69488 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> 43202.pem in /etc/ssl/certs
	I1018 17:50:40.314057   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> /etc/ssl/certs/43202.pem
	I1018 17:50:40.314164   69488 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 17:50:40.322086   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /etc/ssl/certs/43202.pem (1708 bytes)
	I1018 17:50:40.345649   69488 start.go:296] duration metric: took 176.137258ms for postStartSetup
	I1018 17:50:40.345726   69488 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 17:50:40.345765   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m04
	I1018 17:50:40.367346   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m04/id_rsa Username:docker}
	I1018 17:50:40.476066   69488 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 17:50:40.481571   69488 fix.go:56] duration metric: took 5.518874256s for fixHost
	I1018 17:50:40.481594   69488 start.go:83] releasing machines lock for "ha-181800-m04", held for 5.518929354s
	I1018 17:50:40.481667   69488 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-181800-m04
	I1018 17:50:40.518678   69488 out.go:179] * Found network options:
	I1018 17:50:40.522829   69488 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	W1018 17:50:40.526545   69488 proxy.go:120] fail to check proxy env: Error ip not in block
	W1018 17:50:40.526576   69488 proxy.go:120] fail to check proxy env: Error ip not in block
	W1018 17:50:40.526587   69488 proxy.go:120] fail to check proxy env: Error ip not in block
	W1018 17:50:40.526609   69488 proxy.go:120] fail to check proxy env: Error ip not in block
	W1018 17:50:40.526619   69488 proxy.go:120] fail to check proxy env: Error ip not in block
	W1018 17:50:40.526628   69488 proxy.go:120] fail to check proxy env: Error ip not in block
	I1018 17:50:40.526702   69488 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 17:50:40.526739   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m04
	I1018 17:50:40.526991   69488 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 17:50:40.527047   69488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m04
	I1018 17:50:40.564877   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m04/id_rsa Username:docker}
	I1018 17:50:40.572778   69488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m04/id_rsa Username:docker}
	I1018 17:50:40.812088   69488 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 17:50:40.818560   69488 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 17:50:40.818643   69488 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 17:50:40.827770   69488 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 17:50:40.827794   69488 start.go:495] detecting cgroup driver to use...
	I1018 17:50:40.827830   69488 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 17:50:40.827881   69488 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 17:50:40.844762   69488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 17:50:40.859855   69488 docker.go:218] disabling cri-docker service (if available) ...
	I1018 17:50:40.859920   69488 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 17:50:40.877123   69488 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 17:50:40.901442   69488 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 17:50:41.039508   69488 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 17:50:41.185848   69488 docker.go:234] disabling docker service ...
	I1018 17:50:41.185936   69488 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 17:50:41.204077   69488 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 17:50:41.219382   69488 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 17:50:41.421847   69488 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 17:50:41.682651   69488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 17:50:41.704546   69488 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 17:50:41.722306   69488 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 17:50:41.722376   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:50:41.737444   69488 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 17:50:41.737564   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:50:41.753240   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:50:41.765254   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:50:41.778891   69488 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 17:50:41.788840   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:50:41.799676   69488 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:50:41.810022   69488 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 17:50:41.820591   69488 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 17:50:41.828788   69488 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 17:50:41.838483   69488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 17:50:41.972124   69488 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 17:50:42.178891   69488 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 17:50:42.178980   69488 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 17:50:42.184242   69488 start.go:563] Will wait 60s for crictl version
	I1018 17:50:42.184331   69488 ssh_runner.go:195] Run: which crictl
	I1018 17:50:42.191980   69488 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 17:50:42.224462   69488 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 17:50:42.224630   69488 ssh_runner.go:195] Run: crio --version
	I1018 17:50:42.261636   69488 ssh_runner.go:195] Run: crio --version
	I1018 17:50:42.307376   69488 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 17:50:42.310676   69488 out.go:179]   - env NO_PROXY=192.168.49.2
	I1018 17:50:42.313598   69488 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1018 17:50:42.316600   69488 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	I1018 17:50:42.319690   69488 cli_runner.go:164] Run: docker network inspect ha-181800 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 17:50:42.337639   69488 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1018 17:50:42.341794   69488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 17:50:42.354387   69488 mustload.go:65] Loading cluster: ha-181800
	I1018 17:50:42.354632   69488 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:50:42.354880   69488 cli_runner.go:164] Run: docker container inspect ha-181800 --format={{.State.Status}}
	I1018 17:50:42.375574   69488 host.go:66] Checking if "ha-181800" exists ...
	I1018 17:50:42.375851   69488 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800 for IP: 192.168.49.5
	I1018 17:50:42.375865   69488 certs.go:195] generating shared ca certs ...
	I1018 17:50:42.375878   69488 certs.go:227] acquiring lock for ca certs: {Name:mk544ed642fa2832e9f6dd22fa45f3270b7c1ad4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 17:50:42.375994   69488 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key
	I1018 17:50:42.376039   69488 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key
	I1018 17:50:42.376053   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1018 17:50:42.376065   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1018 17:50:42.376082   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1018 17:50:42.376099   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1018 17:50:42.376158   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem (1338 bytes)
	W1018 17:50:42.376191   69488 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320_empty.pem, impossibly tiny 0 bytes
	I1018 17:50:42.376202   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 17:50:42.376227   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem (1078 bytes)
	I1018 17:50:42.376253   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem (1123 bytes)
	I1018 17:50:42.376280   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem (1675 bytes)
	I1018 17:50:42.376328   69488 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem (1708 bytes)
	I1018 17:50:42.376359   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:50:42.376376   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem -> /usr/share/ca-certificates/4320.pem
	I1018 17:50:42.376390   69488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> /usr/share/ca-certificates/43202.pem
	I1018 17:50:42.376442   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 17:50:42.395447   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 17:50:42.416556   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 17:50:42.438126   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 17:50:42.461131   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 17:50:42.491460   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem --> /usr/share/ca-certificates/4320.pem (1338 bytes)
	I1018 17:50:42.516977   69488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /usr/share/ca-certificates/43202.pem (1708 bytes)
	I1018 17:50:42.546320   69488 ssh_runner.go:195] Run: openssl version
	I1018 17:50:42.554579   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 17:50:42.566626   69488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:50:42.570900   69488 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:50:42.570969   69488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 17:50:42.623862   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 17:50:42.634866   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4320.pem && ln -fs /usr/share/ca-certificates/4320.pem /etc/ssl/certs/4320.pem"
	I1018 17:50:42.645108   69488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4320.pem
	I1018 17:50:42.655323   69488 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 17:19 /usr/share/ca-certificates/4320.pem
	I1018 17:50:42.655394   69488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4320.pem
	I1018 17:50:42.704646   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4320.pem /etc/ssl/certs/51391683.0"
	I1018 17:50:42.713644   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43202.pem && ln -fs /usr/share/ca-certificates/43202.pem /etc/ssl/certs/43202.pem"
	I1018 17:50:42.722573   69488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43202.pem
	I1018 17:50:42.726769   69488 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 17:19 /usr/share/ca-certificates/43202.pem
	I1018 17:50:42.726843   69488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43202.pem
	I1018 17:50:42.784245   69488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43202.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 17:50:42.792405   69488 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 17:50:42.803513   69488 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 17:50:42.803579   69488 kubeadm.go:934] updating node {m04 192.168.49.5 0 v1.34.1 crio false true} ...
	I1018 17:50:42.803680   69488 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-181800-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-181800 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 17:50:42.803759   69488 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 17:50:42.812894   69488 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 17:50:42.813002   69488 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1018 17:50:42.821266   69488 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1018 17:50:42.839760   69488 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 17:50:42.859184   69488 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1018 17:50:42.864035   69488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 17:50:42.875123   69488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 17:50:43.006572   69488 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 17:50:43.022917   69488 start.go:235] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1018 17:50:43.023313   69488 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:50:43.026393   69488 out.go:179] * Verifying Kubernetes components...
	I1018 17:50:43.029360   69488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 17:50:43.176018   69488 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 17:50:43.195799   69488 kapi.go:59] client config for ha-181800: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/client.key", CAFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1018 17:50:43.195926   69488 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1018 17:50:43.196200   69488 node_ready.go:35] waiting up to 6m0s for node "ha-181800-m04" to be "Ready" ...
	W1018 17:50:45.201538   69488 node_ready.go:57] node "ha-181800-m04" has "Ready":"Unknown" status (will retry)
	W1018 17:50:47.702556   69488 node_ready.go:57] node "ha-181800-m04" has "Ready":"Unknown" status (will retry)
	W1018 17:50:50.201440   69488 node_ready.go:57] node "ha-181800-m04" has "Ready":"Unknown" status (will retry)
	I1018 17:50:50.700371   69488 node_ready.go:49] node "ha-181800-m04" is "Ready"
	I1018 17:50:50.700396   69488 node_ready.go:38] duration metric: took 7.50415906s for node "ha-181800-m04" to be "Ready" ...
	I1018 17:50:50.700408   69488 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 17:50:50.700467   69488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 17:50:50.718400   69488 system_svc.go:56] duration metric: took 17.984135ms WaitForService to wait for kubelet
	I1018 17:50:50.718432   69488 kubeadm.go:586] duration metric: took 7.695467215s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 17:50:50.718449   69488 node_conditions.go:102] verifying NodePressure condition ...
	I1018 17:50:50.722731   69488 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 17:50:50.722761   69488 node_conditions.go:123] node cpu capacity is 2
	I1018 17:50:50.722774   69488 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 17:50:50.722779   69488 node_conditions.go:123] node cpu capacity is 2
	I1018 17:50:50.722783   69488 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 17:50:50.722787   69488 node_conditions.go:123] node cpu capacity is 2
	I1018 17:50:50.722791   69488 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 17:50:50.722795   69488 node_conditions.go:123] node cpu capacity is 2
	I1018 17:50:50.722799   69488 node_conditions.go:105] duration metric: took 4.345599ms to run NodePressure ...
	I1018 17:50:50.722811   69488 start.go:241] waiting for startup goroutines ...
	I1018 17:50:50.722837   69488 start.go:255] writing updated cluster config ...
	I1018 17:50:50.723159   69488 ssh_runner.go:195] Run: rm -f paused
	I1018 17:50:50.727229   69488 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 17:50:50.727747   69488 kapi.go:59] client config for ha-181800: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/profiles/ha-181800/client.key", CAFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1018 17:50:50.750070   69488 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-f6v2w" in "kube-system" namespace to be "Ready" or be gone ...
	W1018 17:50:52.756554   69488 pod_ready.go:104] pod "coredns-66bc5c9577-f6v2w" is not "Ready", error: <nil>
	W1018 17:50:54.757224   69488 pod_ready.go:104] pod "coredns-66bc5c9577-f6v2w" is not "Ready", error: <nil>
	I1018 17:50:55.872324   69488 pod_ready.go:94] pod "coredns-66bc5c9577-f6v2w" is "Ready"
	I1018 17:50:55.872348   69488 pod_ready.go:86] duration metric: took 5.122247372s for pod "coredns-66bc5c9577-f6v2w" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:55.872359   69488 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-p7nbg" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:55.891895   69488 pod_ready.go:94] pod "coredns-66bc5c9577-p7nbg" is "Ready"
	I1018 17:50:55.891959   69488 pod_ready.go:86] duration metric: took 19.593189ms for pod "coredns-66bc5c9577-p7nbg" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:55.900138   69488 pod_ready.go:83] waiting for pod "etcd-ha-181800" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:55.913638   69488 pod_ready.go:94] pod "etcd-ha-181800" is "Ready"
	I1018 17:50:55.913660   69488 pod_ready.go:86] duration metric: took 13.499842ms for pod "etcd-ha-181800" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:55.913670   69488 pod_ready.go:83] waiting for pod "etcd-ha-181800-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:55.920519   69488 pod_ready.go:94] pod "etcd-ha-181800-m02" is "Ready"
	I1018 17:50:55.920596   69488 pod_ready.go:86] duration metric: took 6.91899ms for pod "etcd-ha-181800-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:55.920619   69488 pod_ready.go:83] waiting for pod "etcd-ha-181800-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:55.954930   69488 pod_ready.go:94] pod "etcd-ha-181800-m03" is "Ready"
	I1018 17:50:55.955010   69488 pod_ready.go:86] duration metric: took 34.368453ms for pod "etcd-ha-181800-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:56.150428   69488 request.go:683] "Waited before sending request" delay="195.256268ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I1018 17:50:56.154502   69488 pod_ready.go:83] waiting for pod "kube-apiserver-ha-181800" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:56.350745   69488 request.go:683] "Waited before sending request" delay="196.132391ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-181800"
	I1018 17:50:56.551187   69488 request.go:683] "Waited before sending request" delay="197.298856ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-181800"
	I1018 17:50:56.554146   69488 pod_ready.go:94] pod "kube-apiserver-ha-181800" is "Ready"
	I1018 17:50:56.554177   69488 pod_ready.go:86] duration metric: took 399.650322ms for pod "kube-apiserver-ha-181800" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:56.554188   69488 pod_ready.go:83] waiting for pod "kube-apiserver-ha-181800-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:56.750528   69488 request.go:683] "Waited before sending request" delay="196.269246ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-181800-m02"
	I1018 17:50:56.951191   69488 request.go:683] "Waited before sending request" delay="191.312029ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-181800-m02"
	I1018 17:50:56.954528   69488 pod_ready.go:94] pod "kube-apiserver-ha-181800-m02" is "Ready"
	I1018 17:50:56.954555   69488 pod_ready.go:86] duration metric: took 400.360633ms for pod "kube-apiserver-ha-181800-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:56.954567   69488 pod_ready.go:83] waiting for pod "kube-apiserver-ha-181800-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:57.150777   69488 request.go:683] "Waited before sending request" delay="196.132408ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-181800-m03"
	I1018 17:50:57.350632   69488 request.go:683] "Waited before sending request" delay="196.3256ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-181800-m03"
	I1018 17:50:57.354249   69488 pod_ready.go:94] pod "kube-apiserver-ha-181800-m03" is "Ready"
	I1018 17:50:57.354277   69488 pod_ready.go:86] duration metric: took 399.70318ms for pod "kube-apiserver-ha-181800-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:57.550692   69488 request.go:683] "Waited before sending request" delay="196.326346ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I1018 17:50:57.554682   69488 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-181800" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:57.750932   69488 request.go:683] "Waited before sending request" delay="196.156235ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-181800"
	I1018 17:50:57.951083   69488 request.go:683] "Waited before sending request" delay="179.305539ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-181800"
	I1018 17:50:57.954373   69488 pod_ready.go:94] pod "kube-controller-manager-ha-181800" is "Ready"
	I1018 17:50:57.954402   69488 pod_ready.go:86] duration metric: took 399.688608ms for pod "kube-controller-manager-ha-181800" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:57.954412   69488 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-181800-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:58.150687   69488 request.go:683] "Waited before sending request" delay="196.203982ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-181800-m02"
	I1018 17:50:58.351259   69488 request.go:683] "Waited before sending request" delay="197.229423ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-181800-m02"
	I1018 17:50:58.354427   69488 pod_ready.go:94] pod "kube-controller-manager-ha-181800-m02" is "Ready"
	I1018 17:50:58.354451   69488 pod_ready.go:86] duration metric: took 400.032752ms for pod "kube-controller-manager-ha-181800-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:58.354461   69488 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-181800-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:58.550867   69488 request.go:683] "Waited before sending request" delay="196.323713ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-181800-m03"
	I1018 17:50:58.751164   69488 request.go:683] "Waited before sending request" delay="196.337531ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-181800-m03"
	I1018 17:50:58.754290   69488 pod_ready.go:94] pod "kube-controller-manager-ha-181800-m03" is "Ready"
	I1018 17:50:58.754318   69488 pod_ready.go:86] duration metric: took 399.850398ms for pod "kube-controller-manager-ha-181800-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:58.950697   69488 request.go:683] "Waited before sending request" delay="196.290137ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I1018 17:50:58.954553   69488 pod_ready.go:83] waiting for pod "kube-proxy-dpwpn" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:59.150998   69488 request.go:683] "Waited before sending request" delay="196.346368ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dpwpn"
	I1018 17:50:59.350617   69488 request.go:683] "Waited before sending request" delay="195.289755ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-181800-m02"
	I1018 17:50:59.353848   69488 pod_ready.go:94] pod "kube-proxy-dpwpn" is "Ready"
	I1018 17:50:59.353878   69488 pod_ready.go:86] duration metric: took 399.293025ms for pod "kube-proxy-dpwpn" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:59.353888   69488 pod_ready.go:83] waiting for pod "kube-proxy-fj4ww" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:59.550367   69488 request.go:683] "Waited before sending request" delay="196.374503ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fj4ww"
	I1018 17:50:59.751156   69488 request.go:683] "Waited before sending request" delay="197.148429ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-181800-m04"
	I1018 17:50:59.754407   69488 pod_ready.go:94] pod "kube-proxy-fj4ww" is "Ready"
	I1018 17:50:59.754437   69488 pod_ready.go:86] duration metric: took 400.541386ms for pod "kube-proxy-fj4ww" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:59.754446   69488 pod_ready.go:83] waiting for pod "kube-proxy-qsqmb" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:50:59.950755   69488 request.go:683] "Waited before sending request" delay="196.237656ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qsqmb"
	I1018 17:51:00.158458   69488 request.go:683] "Waited before sending request" delay="204.154018ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-181800-m03"
	I1018 17:51:00.170490   69488 pod_ready.go:94] pod "kube-proxy-qsqmb" is "Ready"
	I1018 17:51:00.170526   69488 pod_ready.go:86] duration metric: took 416.072575ms for pod "kube-proxy-qsqmb" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:51:00.170537   69488 pod_ready.go:83] waiting for pod "kube-proxy-stgvm" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:51:00.350837   69488 request.go:683] "Waited before sending request" delay="180.202158ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-stgvm"
	I1018 17:51:00.550600   69488 request.go:683] "Waited before sending request" delay="195.396062ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-181800"
	I1018 17:51:00.553989   69488 pod_ready.go:94] pod "kube-proxy-stgvm" is "Ready"
	I1018 17:51:00.554026   69488 pod_ready.go:86] duration metric: took 383.481925ms for pod "kube-proxy-stgvm" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:51:00.750322   69488 request.go:683] "Waited before sending request" delay="196.164105ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-scheduler"
	I1018 17:51:00.754581   69488 pod_ready.go:83] waiting for pod "kube-scheduler-ha-181800" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:51:00.951090   69488 request.go:683] "Waited before sending request" delay="196.343135ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-181800"
	I1018 17:51:01.151207   69488 request.go:683] "Waited before sending request" delay="196.368472ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-181800"
	I1018 17:51:01.154780   69488 pod_ready.go:94] pod "kube-scheduler-ha-181800" is "Ready"
	I1018 17:51:01.154809   69488 pod_ready.go:86] duration metric: took 400.156865ms for pod "kube-scheduler-ha-181800" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:51:01.154820   69488 pod_ready.go:83] waiting for pod "kube-scheduler-ha-181800-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:51:01.351014   69488 request.go:683] "Waited before sending request" delay="196.125229ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-181800-m02"
	I1018 17:51:01.550334   69488 request.go:683] "Waited before sending request" delay="195.254374ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-181800-m02"
	I1018 17:51:01.553462   69488 pod_ready.go:94] pod "kube-scheduler-ha-181800-m02" is "Ready"
	I1018 17:51:01.553533   69488 pod_ready.go:86] duration metric: took 398.706213ms for pod "kube-scheduler-ha-181800-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:51:01.553558   69488 pod_ready.go:83] waiting for pod "kube-scheduler-ha-181800-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:51:01.750793   69488 request.go:683] "Waited before sending request" delay="197.139116ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-181800-m03"
	I1018 17:51:01.951100   69488 request.go:683] "Waited before sending request" delay="196.302232ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-181800-m03"
	I1018 17:51:01.954435   69488 pod_ready.go:94] pod "kube-scheduler-ha-181800-m03" is "Ready"
	I1018 17:51:01.954463   69488 pod_ready.go:86] duration metric: took 400.885736ms for pod "kube-scheduler-ha-181800-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 17:51:01.954476   69488 pod_ready.go:40] duration metric: took 11.227212191s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 17:51:02.019798   69488 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 17:51:02.023234   69488 out.go:179] * Done! kubectl is now configured to use "ha-181800" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 18 17:50:45 ha-181800 crio[665]: time="2025-10-18T17:50:45.572124206Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=3818bf02-e1ec-45e5-8db2-98e9f6e8000a name=/runtime.v1.ImageService/ImageStatus
	Oct 18 17:50:45 ha-181800 crio[665]: time="2025-10-18T17:50:45.573451845Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=bdb883a0-d1f7-44fb-bec3-c90a1d2ecb55 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 17:50:45 ha-181800 crio[665]: time="2025-10-18T17:50:45.573727681Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 17:50:45 ha-181800 crio[665]: time="2025-10-18T17:50:45.584989537Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 17:50:45 ha-181800 crio[665]: time="2025-10-18T17:50:45.585193183Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/87a35d3c6fccfe095ac3771dcbde81fc5df65bc9200469d9386fd64ba3708913/merged/etc/passwd: no such file or directory"
	Oct 18 17:50:45 ha-181800 crio[665]: time="2025-10-18T17:50:45.585221163Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/87a35d3c6fccfe095ac3771dcbde81fc5df65bc9200469d9386fd64ba3708913/merged/etc/group: no such file or directory"
	Oct 18 17:50:45 ha-181800 crio[665]: time="2025-10-18T17:50:45.585494192Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 17:50:45 ha-181800 crio[665]: time="2025-10-18T17:50:45.609702849Z" level=info msg="Created container 3955a976d16cdd5db102930c28bfc2c48f3fd22d0d8f4186e30edecd860f23fd: kube-system/storage-provisioner/storage-provisioner" id=bdb883a0-d1f7-44fb-bec3-c90a1d2ecb55 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 17:50:45 ha-181800 crio[665]: time="2025-10-18T17:50:45.610857892Z" level=info msg="Starting container: 3955a976d16cdd5db102930c28bfc2c48f3fd22d0d8f4186e30edecd860f23fd" id=4f969c9f-8845-4412-b24f-e780eb6068e8 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 17:50:45 ha-181800 crio[665]: time="2025-10-18T17:50:45.615041848Z" level=info msg="Started container" PID=1488 containerID=3955a976d16cdd5db102930c28bfc2c48f3fd22d0d8f4186e30edecd860f23fd description=kube-system/storage-provisioner/storage-provisioner id=4f969c9f-8845-4412-b24f-e780eb6068e8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9d76fad66ab674fdb6d96a586ff07b63771e9f80ffb0da6d960f75270994737e
	Oct 18 17:50:54 ha-181800 crio[665]: time="2025-10-18T17:50:54.473504065Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 17:50:54 ha-181800 crio[665]: time="2025-10-18T17:50:54.479286252Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 17:50:54 ha-181800 crio[665]: time="2025-10-18T17:50:54.479449553Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 17:50:54 ha-181800 crio[665]: time="2025-10-18T17:50:54.479659115Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 17:50:54 ha-181800 crio[665]: time="2025-10-18T17:50:54.500865649Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 17:50:54 ha-181800 crio[665]: time="2025-10-18T17:50:54.502400176Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 17:50:54 ha-181800 crio[665]: time="2025-10-18T17:50:54.502551702Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 17:50:54 ha-181800 crio[665]: time="2025-10-18T17:50:54.511806492Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 17:50:54 ha-181800 crio[665]: time="2025-10-18T17:50:54.511960258Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 17:50:54 ha-181800 crio[665]: time="2025-10-18T17:50:54.51203262Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 17:50:54 ha-181800 crio[665]: time="2025-10-18T17:50:54.515388889Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 17:50:54 ha-181800 crio[665]: time="2025-10-18T17:50:54.515422391Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 17:50:54 ha-181800 crio[665]: time="2025-10-18T17:50:54.515444882Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 17:50:54 ha-181800 crio[665]: time="2025-10-18T17:50:54.526060264Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 17:50:54 ha-181800 crio[665]: time="2025-10-18T17:50:54.526097122Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	3955a976d16cd       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   2 minutes ago       Running             storage-provisioner       3                   9d76fad66ab67       storage-provisioner                 kube-system
	b70649f38d4c7       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   2 minutes ago       Running             busybox                   2                   2d6e6e05d930c       busybox-7b57f96db7-fbwpv            default
	244a77fe1563d       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   2 minutes ago       Running             coredns                   2                   ac0ef71240719       coredns-66bc5c9577-p7nbg            kube-system
	45c33b76be4e1       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   2 minutes ago       Running             kindnet-cni               2                   0e97ce88bd2d3       kindnet-72mvm                       kube-system
	8aea864f19933       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   2 minutes ago       Running             kube-proxy                2                   c1b0887367928       kube-proxy-stgvm                    kube-system
	6d80af764ee06       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   2 minutes ago       Running             coredns                   2                   ed23b1fbdbbb3       coredns-66bc5c9577-f6v2w            kube-system
	f2f15c809753a       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   2 minutes ago       Exited              storage-provisioner       2                   9d76fad66ab67       storage-provisioner                 kube-system
	4cff6e37b85af       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   2 minutes ago       Running             kube-controller-manager   8                   c14a7cc20dbd7       kube-controller-manager-ha-181800   kube-system
	787ba7d1db588       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   2 minutes ago       Running             kube-apiserver            8                   aedac42fff114       kube-apiserver-ha-181800            kube-system
	bd6f9d7be6037       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   3 minutes ago       Exited              kube-controller-manager   7                   c14a7cc20dbd7       kube-controller-manager-ha-181800   kube-system
	7df0159a16497       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   3 minutes ago       Exited              kube-apiserver            7                   aedac42fff114       kube-apiserver-ha-181800            kube-system
	8d49f8f056288       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   4 minutes ago       Running             etcd                      2                   c5458ae9aa01d       etcd-ha-181800                      kube-system
	42139c5070f82       2a8917f902489be5a8dd414209c32b77bd644d187ea646d86dbdc31e85efb551   4 minutes ago       Running             kube-vip                  1                   ac5de0631c6c9       kube-vip-ha-181800                  kube-system
	fb83e2f9880f4       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   4 minutes ago       Running             kube-scheduler            2                   042db5c7b2fa5       kube-scheduler-ha-181800            kube-system
	
	
	==> coredns [244a77fe1563d266b1c09476ad0f3463ffeb31f96c85ba703ffe04a24a967497] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42812 - 40298 "HINFO IN 6519948929031597716.8341788919287889456. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.016440056s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [6d80af764ee0602bdd0407c66fcc9de24c8b7b254f4ce667725e048906d15a87] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35970 - 34760 "HINFO IN 4620377952315927478.2937315152384107880. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.029628682s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-181800
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-181800
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=ha-181800
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T17_33_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 17:33:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-181800
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 17:52:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 17:52:43 +0000   Sat, 18 Oct 2025 17:33:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 17:52:43 +0000   Sat, 18 Oct 2025 17:33:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 17:52:43 +0000   Sat, 18 Oct 2025 17:33:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 17:52:43 +0000   Sat, 18 Oct 2025 17:34:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-181800
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                7dc9b150-98ed-4d4d-b680-5759a1e067a9
	  Boot ID:                    8a1d6305-2994-4aa4-a4cc-6b62966b9918
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-fbwpv             0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 coredns-66bc5c9577-f6v2w             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     19m
	  kube-system                 coredns-66bc5c9577-p7nbg             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     19m
	  kube-system                 etcd-ha-181800                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         19m
	  kube-system                 kindnet-72mvm                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      19m
	  kube-system                 kube-apiserver-ha-181800             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-ha-181800    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-stgvm                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-ha-181800             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-vip-ha-181800                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 19m                    kube-proxy       
	  Normal   Starting                 2m31s                  kube-proxy       
	  Normal   Starting                 10m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  19m (x8 over 19m)      kubelet          Node ha-181800 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    19m (x8 over 19m)      kubelet          Node ha-181800 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     19m (x8 over 19m)      kubelet          Node ha-181800 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientPID     19m                    kubelet          Node ha-181800 status is now: NodeHasSufficientPID
	  Normal   Starting                 19m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 19m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  19m                    kubelet          Node ha-181800 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    19m                    kubelet          Node ha-181800 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           19m                    node-controller  Node ha-181800 event: Registered Node ha-181800 in Controller
	  Normal   RegisteredNode           18m                    node-controller  Node ha-181800 event: Registered Node ha-181800 in Controller
	  Normal   NodeReady                18m                    kubelet          Node ha-181800 status is now: NodeReady
	  Normal   RegisteredNode           17m                    node-controller  Node ha-181800 event: Registered Node ha-181800 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-181800 event: Registered Node ha-181800 in Controller
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)      kubelet          Node ha-181800 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     12m (x8 over 12m)      kubelet          Node ha-181800 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)      kubelet          Node ha-181800 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 12m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   RegisteredNode           11m                    node-controller  Node ha-181800 event: Registered Node ha-181800 in Controller
	  Normal   Starting                 4m30s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m30s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  4m30s (x8 over 4m30s)  kubelet          Node ha-181800 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m30s (x8 over 4m30s)  kubelet          Node ha-181800 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m30s (x8 over 4m30s)  kubelet          Node ha-181800 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m35s                  node-controller  Node ha-181800 event: Registered Node ha-181800 in Controller
	  Normal   RegisteredNode           2m30s                  node-controller  Node ha-181800 event: Registered Node ha-181800 in Controller
	  Normal   RegisteredNode           2m6s                   node-controller  Node ha-181800 event: Registered Node ha-181800 in Controller
	  Normal   RegisteredNode           59s                    node-controller  Node ha-181800 event: Registered Node ha-181800 in Controller
	
	
	Name:               ha-181800-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-181800-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=ha-181800
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_18T17_34_02_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 17:34:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-181800-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 17:52:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 17:51:10 +0000   Sat, 18 Oct 2025 17:50:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 17:51:10 +0000   Sat, 18 Oct 2025 17:50:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 17:51:10 +0000   Sat, 18 Oct 2025 17:50:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 17:51:10 +0000   Sat, 18 Oct 2025 17:50:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-181800-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                b2dd8f24-78e0-4eba-8b0c-b12412f7af7d
	  Boot ID:                    8a1d6305-2994-4aa4-a4cc-6b62966b9918
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-cp9q6                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 etcd-ha-181800-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         18m
	  kube-system                 kindnet-86s8z                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      18m
	  kube-system                 kube-apiserver-ha-181800-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-ha-181800-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-dpwpn                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-ha-181800-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-vip-ha-181800-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 18m                    kube-proxy       
	  Normal   Starting                 2m2s                   kube-proxy       
	  Normal   RegisteredNode           18m                    node-controller  Node ha-181800-m02 event: Registered Node ha-181800-m02 in Controller
	  Normal   RegisteredNode           18m                    node-controller  Node ha-181800-m02 event: Registered Node ha-181800-m02 in Controller
	  Normal   RegisteredNode           17m                    node-controller  Node ha-181800-m02 event: Registered Node ha-181800-m02 in Controller
	  Normal   NodeHasSufficientPID     15m (x7 over 15m)      kubelet          Node ha-181800-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  15m (x9 over 15m)      kubelet          Node ha-181800-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node ha-181800-m02 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 15m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 15m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeNotReady             14m                    node-controller  Node ha-181800-m02 status is now: NodeNotReady
	  Warning  ContainerGCFailed        14m                    kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           13m                    node-controller  Node ha-181800-m02 event: Registered Node ha-181800-m02 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-181800-m02 event: Registered Node ha-181800-m02 in Controller
	  Normal   NodeNotReady             10m                    node-controller  Node ha-181800-m02 status is now: NodeNotReady
	  Warning  CgroupV1                 4m28s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  4m28s (x8 over 4m28s)  kubelet          Node ha-181800-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m28s (x8 over 4m28s)  kubelet          Node ha-181800-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m28s (x8 over 4m28s)  kubelet          Node ha-181800-m02 status is now: NodeHasSufficientPID
	  Warning  ContainerGCFailed        3m28s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           2m35s                  node-controller  Node ha-181800-m02 event: Registered Node ha-181800-m02 in Controller
	  Normal   RegisteredNode           2m30s                  node-controller  Node ha-181800-m02 event: Registered Node ha-181800-m02 in Controller
	  Normal   RegisteredNode           2m6s                   node-controller  Node ha-181800-m02 event: Registered Node ha-181800-m02 in Controller
	  Normal   RegisteredNode           59s                    node-controller  Node ha-181800-m02 event: Registered Node ha-181800-m02 in Controller
	
	
	Name:               ha-181800-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-181800-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=ha-181800
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_18T17_35_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 17:35:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-181800-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 17:52:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 17:51:35 +0000   Sat, 18 Oct 2025 17:50:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 17:51:35 +0000   Sat, 18 Oct 2025 17:50:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 17:51:35 +0000   Sat, 18 Oct 2025 17:50:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 17:51:35 +0000   Sat, 18 Oct 2025 17:50:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-181800-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                4a1abf8a-63a3-4737-81ec-1878616c489b
	  Boot ID:                    8a1d6305-2994-4aa4-a4cc-6b62966b9918
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-lzcbm                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 etcd-ha-181800-m03                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         17m
	  kube-system                 kindnet-9qbbw                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      17m
	  kube-system                 kube-apiserver-ha-181800-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-ha-181800-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-qsqmb                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-ha-181800-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-vip-ha-181800-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 17m                    kube-proxy       
	  Normal   Starting                 2m9s                   kube-proxy       
	  Normal   RegisteredNode           17m                    node-controller  Node ha-181800-m03 event: Registered Node ha-181800-m03 in Controller
	  Normal   RegisteredNode           17m                    node-controller  Node ha-181800-m03 event: Registered Node ha-181800-m03 in Controller
	  Normal   RegisteredNode           17m                    node-controller  Node ha-181800-m03 event: Registered Node ha-181800-m03 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-181800-m03 event: Registered Node ha-181800-m03 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-181800-m03 event: Registered Node ha-181800-m03 in Controller
	  Normal   NodeNotReady             10m                    node-controller  Node ha-181800-m03 status is now: NodeNotReady
	  Normal   RegisteredNode           2m35s                  node-controller  Node ha-181800-m03 event: Registered Node ha-181800-m03 in Controller
	  Warning  CgroupV1                 2m34s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 2m34s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  2m34s (x8 over 2m34s)  kubelet          Node ha-181800-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m34s (x8 over 2m34s)  kubelet          Node ha-181800-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m34s (x8 over 2m34s)  kubelet          Node ha-181800-m03 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m30s                  node-controller  Node ha-181800-m03 event: Registered Node ha-181800-m03 in Controller
	  Normal   RegisteredNode           2m6s                   node-controller  Node ha-181800-m03 event: Registered Node ha-181800-m03 in Controller
	  Normal   RegisteredNode           59s                    node-controller  Node ha-181800-m03 event: Registered Node ha-181800-m03 in Controller
	
	
	Name:               ha-181800-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-181800-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=ha-181800
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_18T17_36_11_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 17:36:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-181800-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 17:52:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 17:52:42 +0000   Sat, 18 Oct 2025 17:50:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 17:52:42 +0000   Sat, 18 Oct 2025 17:50:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 17:52:42 +0000   Sat, 18 Oct 2025 17:50:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 17:52:42 +0000   Sat, 18 Oct 2025 17:50:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-181800-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                afc79373-b3a1-4495-8f28-5c3685ad131e
	  Boot ID:                    8a1d6305-2994-4aa4-a4cc-6b62966b9918
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-88bv7       100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      16m
	  kube-system                 kube-proxy-fj4ww    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                   From             Message
	  ----     ------                   ----                  ----             -------
	  Normal   Starting                 16m                   kube-proxy       
	  Normal   Starting                 110s                  kube-proxy       
	  Normal   Starting                 16m                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 16m                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     16m (x3 over 16m)     kubelet          Node ha-181800-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    16m (x3 over 16m)     kubelet          Node ha-181800-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  16m (x3 over 16m)     kubelet          Node ha-181800-m04 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           16m                   node-controller  Node ha-181800-m04 event: Registered Node ha-181800-m04 in Controller
	  Normal   RegisteredNode           16m                   node-controller  Node ha-181800-m04 event: Registered Node ha-181800-m04 in Controller
	  Normal   RegisteredNode           16m                   node-controller  Node ha-181800-m04 event: Registered Node ha-181800-m04 in Controller
	  Normal   NodeReady                15m                   kubelet          Node ha-181800-m04 status is now: NodeReady
	  Normal   RegisteredNode           13m                   node-controller  Node ha-181800-m04 event: Registered Node ha-181800-m04 in Controller
	  Normal   RegisteredNode           11m                   node-controller  Node ha-181800-m04 event: Registered Node ha-181800-m04 in Controller
	  Normal   NodeNotReady             10m                   node-controller  Node ha-181800-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           2m35s                 node-controller  Node ha-181800-m04 event: Registered Node ha-181800-m04 in Controller
	  Normal   RegisteredNode           2m30s                 node-controller  Node ha-181800-m04 event: Registered Node ha-181800-m04 in Controller
	  Warning  CgroupV1                 2m10s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 2m10s                 kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  2m7s (x8 over 2m10s)  kubelet          Node ha-181800-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m7s (x8 over 2m10s)  kubelet          Node ha-181800-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m7s (x8 over 2m10s)  kubelet          Node ha-181800-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m6s                  node-controller  Node ha-181800-m04 event: Registered Node ha-181800-m04 in Controller
	  Normal   RegisteredNode           59s                   node-controller  Node ha-181800-m04 event: Registered Node ha-181800-m04 in Controller
	
	
	Name:               ha-181800-m05
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-181800-m05
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=ha-181800
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_18T17_51_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 17:51:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-181800-m05
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 17:52:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 17:52:39 +0000   Sat, 18 Oct 2025 17:51:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 17:52:39 +0000   Sat, 18 Oct 2025 17:51:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 17:52:39 +0000   Sat, 18 Oct 2025 17:51:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 17:52:39 +0000   Sat, 18 Oct 2025 17:52:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.6
	  Hostname:    ha-181800-m05
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                79f1696c-3016-4cac-b220-5cfbf18101cc
	  Boot ID:                    8a1d6305-2994-4aa4-a4cc-6b62966b9918
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.4.0/24
	PodCIDRs:                     10.244.4.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-ha-181800-m05                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         50s
	  kube-system                 kindnet-mtzkz                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      58s
	  kube-system                 kube-apiserver-ha-181800-m05             250m (12%)    0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                 kube-controller-manager-ha-181800-m05    200m (10%)    0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                 kube-proxy-7xkff                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 kube-scheduler-ha-181800-m05             100m (5%)     0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                 kube-vip-ha-181800-m05                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  Starting        46s   kube-proxy       
	  Normal  RegisteredNode  56s   node-controller  Node ha-181800-m05 event: Registered Node ha-181800-m05 in Controller
	  Normal  RegisteredNode  55s   node-controller  Node ha-181800-m05 event: Registered Node ha-181800-m05 in Controller
	  Normal  RegisteredNode  55s   node-controller  Node ha-181800-m05 event: Registered Node ha-181800-m05 in Controller
	  Normal  RegisteredNode  54s   node-controller  Node ha-181800-m05 event: Registered Node ha-181800-m05 in Controller
	
	
	==> dmesg <==
	[Oct18 16:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014995] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.499206] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.035776] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.808632] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.418900] kauditd_printk_skb: 36 callbacks suppressed
	[Oct18 17:12] overlayfs: idmapped layers are currently not supported
	[  +0.082393] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Oct18 17:18] overlayfs: idmapped layers are currently not supported
	[Oct18 17:19] overlayfs: idmapped layers are currently not supported
	[Oct18 17:33] overlayfs: idmapped layers are currently not supported
	[ +35.716082] overlayfs: idmapped layers are currently not supported
	[Oct18 17:35] overlayfs: idmapped layers are currently not supported
	[Oct18 17:36] overlayfs: idmapped layers are currently not supported
	[Oct18 17:37] overlayfs: idmapped layers are currently not supported
	[Oct18 17:39] overlayfs: idmapped layers are currently not supported
	[  +3.088699] overlayfs: idmapped layers are currently not supported
	[Oct18 17:48] overlayfs: idmapped layers are currently not supported
	[  +2.594489] overlayfs: idmapped layers are currently not supported
	[Oct18 17:50] overlayfs: idmapped layers are currently not supported
	[ +42.240353] overlayfs: idmapped layers are currently not supported
	[Oct18 17:51] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [8d49f8f05628805a90b3d99b19810fe13d13747bb11c8daf730344aef4d339f6] <==
	{"level":"info","ts":"2025-10-18T17:51:37.693855Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"b3fe458a773b8b53"}
	{"level":"info","ts":"2025-10-18T17:51:37.819629Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"b3fe458a773b8b53"}
	{"level":"info","ts":"2025-10-18T17:51:37.820697Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"b3fe458a773b8b53","stream-type":"stream MsgApp v2"}
	{"level":"warn","ts":"2025-10-18T17:51:37.820770Z","caller":"rafthttp/stream.go:264","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"b3fe458a773b8b53"}
	{"level":"info","ts":"2025-10-18T17:51:37.820816Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"b3fe458a773b8b53"}
	{"level":"info","ts":"2025-10-18T17:51:37.870200Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"b3fe458a773b8b53"}
	{"level":"info","ts":"2025-10-18T17:51:37.886047Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"b3fe458a773b8b53","stream-type":"stream Message"}
	{"level":"info","ts":"2025-10-18T17:51:37.886138Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"b3fe458a773b8b53"}
	{"level":"info","ts":"2025-10-18T17:51:49.723288Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-10-18T17:51:50.425890Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-10-18T17:51:50.566233Z","caller":"traceutil/trace.go:172","msg":"trace[1582315109] linearizableReadLoop","detail":"{readStateIndex:4544; appliedIndex:4547; }","duration":"129.885952ms","start":"2025-10-18T17:51:50.436305Z","end":"2025-10-18T17:51:50.566191Z","steps":["trace[1582315109] 'read index received'  (duration: 129.876918ms)","trace[1582315109] 'applied index is now lower than readState.Index'  (duration: 7.918µs)"],"step_count":2}
	{"level":"info","ts":"2025-10-18T17:51:50.566502Z","caller":"traceutil/trace.go:172","msg":"trace[2120789788] transaction","detail":"{read_only:false; number_of_response:1; response_revision:3873; }","duration":"106.818583ms","start":"2025-10-18T17:51:50.459672Z","end":"2025-10-18T17:51:50.566491Z","steps":["trace[2120789788] 'process raft request'  (duration: 10.606923ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T17:51:50.568362Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"149.520857ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-hmlzh\" limit:1 ","response":"range_response_count:1 size:3816"}
	{"level":"info","ts":"2025-10-18T17:51:50.569616Z","caller":"traceutil/trace.go:172","msg":"trace[486774353] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-hmlzh; range_end:; response_count:1; response_revision:3873; }","duration":"150.782757ms","start":"2025-10-18T17:51:50.418818Z","end":"2025-10-18T17:51:50.569601Z","steps":["trace[486774353] 'agreement among raft nodes before linearized reading'  (duration: 149.204012ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T17:51:50.570055Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"154.847465ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kindnet-vlnvb\" limit:1 ","response":"range_response_count:1 size:4085"}
	{"level":"info","ts":"2025-10-18T17:51:50.570504Z","caller":"traceutil/trace.go:172","msg":"trace[396383788] range","detail":"{range_begin:/registry/pods/kube-system/kindnet-vlnvb; range_end:; response_count:1; response_revision:3873; }","duration":"154.944812ms","start":"2025-10-18T17:51:50.415188Z","end":"2025-10-18T17:51:50.570133Z","steps":["trace[396383788] 'agreement among raft nodes before linearized reading'  (duration: 154.739353ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T17:51:50.571391Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"156.201124ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kindnet-n8km8\" limit:1 ","response":"range_response_count:1 size:4073"}
	{"level":"info","ts":"2025-10-18T17:51:50.571488Z","caller":"traceutil/trace.go:172","msg":"trace[2053080079] range","detail":"{range_begin:/registry/pods/kube-system/kindnet-n8km8; range_end:; response_count:1; response_revision:3873; }","duration":"156.308695ms","start":"2025-10-18T17:51:50.415167Z","end":"2025-10-18T17:51:50.571476Z","steps":["trace[2053080079] 'agreement among raft nodes before linearized reading'  (duration: 156.020116ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T17:51:50.596699Z","caller":"traceutil/trace.go:172","msg":"trace[1611105105] transaction","detail":"{read_only:false; number_of_response:1; response_revision:3874; }","duration":"135.872917ms","start":"2025-10-18T17:51:50.460812Z","end":"2025-10-18T17:51:50.596685Z","steps":["trace[1611105105] 'process raft request'  (duration: 135.612943ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T17:51:50.600504Z","caller":"traceutil/trace.go:172","msg":"trace[1690400551] transaction","detail":"{read_only:false; number_of_response:1; response_revision:3875; }","duration":"138.675645ms","start":"2025-10-18T17:51:50.461815Z","end":"2025-10-18T17:51:50.600491Z","steps":["trace[1690400551] 'process raft request'  (duration: 138.434789ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T17:51:50.610397Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"150.891213ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-18T17:51:50.616155Z","caller":"traceutil/trace.go:172","msg":"trace[509590566] range","detail":"{range_begin:/registry/replicasets; range_end:; response_count:0; response_revision:3882; }","duration":"150.960039ms","start":"2025-10-18T17:51:50.459486Z","end":"2025-10-18T17:51:50.610446Z","steps":["trace[509590566] 'agreement among raft nodes before linearized reading'  (duration: 150.875221ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T17:51:59.350226Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-10-18T17:52:00.580917Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-10-18T17:52:07.308970Z","caller":"etcdserver/server.go:1856","msg":"sent merged snapshot","from":"aec36adc501070cc","to":"b3fe458a773b8b53","bytes":7190412,"size":"7.2 MB","took":"31.744859176s"}
	
	
	==> kernel <==
	 17:52:47 up  1:35,  0 user,  load average: 5.46, 3.37, 1.92
	Linux ha-181800 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [45c33b76be4e1c5e61c683306b76aeb0fcbfda863ba2562aee4d85f222728470] <==
	I1018 17:52:24.479387       1 main.go:324] Node ha-181800-m02 has CIDR [10.244.1.0/24] 
	I1018 17:52:24.479480       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1018 17:52:24.479516       1 main.go:324] Node ha-181800-m03 has CIDR [10.244.2.0/24] 
	I1018 17:52:24.479617       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1018 17:52:24.479650       1 main.go:324] Node ha-181800-m04 has CIDR [10.244.3.0/24] 
	I1018 17:52:34.471311       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 17:52:34.471423       1 main.go:301] handling current node
	I1018 17:52:34.471448       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1018 17:52:34.471456       1 main.go:324] Node ha-181800-m02 has CIDR [10.244.1.0/24] 
	I1018 17:52:34.471625       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1018 17:52:34.471638       1 main.go:324] Node ha-181800-m03 has CIDR [10.244.2.0/24] 
	I1018 17:52:34.471702       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1018 17:52:34.471716       1 main.go:324] Node ha-181800-m04 has CIDR [10.244.3.0/24] 
	I1018 17:52:34.471772       1 main.go:297] Handling node with IPs: map[192.168.49.6:{}]
	I1018 17:52:34.471777       1 main.go:324] Node ha-181800-m05 has CIDR [10.244.4.0/24] 
	I1018 17:52:44.471012       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 17:52:44.471152       1 main.go:301] handling current node
	I1018 17:52:44.471195       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1018 17:52:44.471207       1 main.go:324] Node ha-181800-m02 has CIDR [10.244.1.0/24] 
	I1018 17:52:44.471458       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1018 17:52:44.471474       1 main.go:324] Node ha-181800-m03 has CIDR [10.244.2.0/24] 
	I1018 17:52:44.471557       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1018 17:52:44.471569       1 main.go:324] Node ha-181800-m04 has CIDR [10.244.3.0/24] 
	I1018 17:52:44.471634       1 main.go:297] Handling node with IPs: map[192.168.49.6:{}]
	I1018 17:52:44.471652       1 main.go:324] Node ha-181800-m05 has CIDR [10.244.4.0/24] 
	
	
	==> kube-apiserver [787ba7d1db5885d5987b39cc564271b65d0c3534789595970e69e1fc2af692fa] <==
	I1018 17:50:08.637365       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 17:50:08.648586       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1018 17:50:08.649478       1 aggregator.go:171] initial CRD sync complete...
	I1018 17:50:08.658365       1 autoregister_controller.go:144] Starting autoregister controller
	I1018 17:50:08.658478       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 17:50:08.658528       1 cache.go:39] Caches are synced for autoregister controller
	I1018 17:50:08.648742       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1018 17:50:08.660408       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1018 17:50:08.685820       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1018 17:50:08.685952       1 policy_source.go:240] refreshing policies
	I1018 17:50:08.705489       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1018 17:50:08.711819       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1018 17:50:08.721543       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1018 17:50:08.729935       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1018 17:50:08.730318       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1018 17:50:08.730492       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1018 17:50:08.730520       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1018 17:50:08.730960       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1018 17:50:08.746648       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1018 17:50:08.747504       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 17:50:09.243989       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 17:50:13.235609       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 17:50:36.709527       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1018 17:50:36.815877       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 17:50:46.351258       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-apiserver [7df0159a16497989a32ac40623e8901229679b8716e6b590b84a0d3e1054f4d6] <==
	I1018 17:49:21.128362       1 server.go:150] Version: v1.34.1
	I1018 17:49:21.128401       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W1018 17:49:22.017042       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=scheduling.k8s.io/v1alpha1
	W1018 17:49:22.017075       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=admissionregistration.k8s.io/v1alpha1
	W1018 17:49:22.017084       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=certificates.k8s.io/v1alpha1
	W1018 17:49:22.017089       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=imagepolicy.k8s.io/v1alpha1
	W1018 17:49:22.017094       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=rbac.authorization.k8s.io/v1alpha1
	W1018 17:49:22.017098       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storage.k8s.io/v1alpha1
	W1018 17:49:22.017103       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=coordination.k8s.io/v1alpha2
	W1018 17:49:22.017107       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=resource.k8s.io/v1alpha3
	W1018 17:49:22.017111       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storagemigration.k8s.io/v1alpha1
	W1018 17:49:22.017116       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=internal.apiserver.k8s.io/v1alpha1
	W1018 17:49:22.017120       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=authentication.k8s.io/v1alpha1
	W1018 17:49:22.017125       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=node.k8s.io/v1alpha1
	W1018 17:49:22.035548       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1018 17:49:22.037326       1 logging.go:55] [core] [Channel #4 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1018 17:49:22.037937       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	I1018 17:49:22.044391       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1018 17:49:22.056396       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I1018 17:49:22.056496       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1018 17:49:22.056813       1 instance.go:239] Using reconciler: lease
	W1018 17:49:22.058127       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1018 17:49:42.034705       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1018 17:49:42.036960       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F1018 17:49:42.058557       1 instance.go:232] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [4cff6e37b85af70621f4b47faf3b854223fcae935be9ad45a9a99a523f33574b] <==
	I1018 17:50:17.475715       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 17:50:17.477471       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1018 17:50:17.478740       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-181800-m03"
	I1018 17:50:17.478810       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-181800-m04"
	I1018 17:50:17.478834       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-181800"
	I1018 17:50:17.478868       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-181800-m02"
	I1018 17:50:17.479116       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1018 17:50:17.483521       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1018 17:50:17.491656       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 17:50:17.491691       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 17:50:17.491699       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 17:50:17.491580       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1018 17:50:17.503394       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 17:50:17.508362       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1018 17:50:17.509154       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 17:50:50.411726       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-181800-m04"
	I1018 17:50:55.780269       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-kgtwl EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-kgtwl\": the object has been modified; please apply your changes to the latest version and try again"
	I1018 17:50:55.782431       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"9f28e5d3-f804-46e7-b8a3-f9f96165b245", APIVersion:"v1", ResourceVersion:"306", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-kgtwl EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-kgtwl": the object has been modified; please apply your changes to the latest version and try again
	E1018 17:50:55.860481       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/coredns-66bc5c9577\" failed with Operation cannot be fulfilled on replicasets.apps \"coredns-66bc5c9577\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E1018 17:51:48.671222       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-9lnm7 failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-9lnm7\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1018 17:51:49.455195       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-181800-m05\" does not exist"
	I1018 17:51:49.455335       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-181800-m04"
	I1018 17:51:49.522527       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-181800-m05" podCIDRs=["10.244.4.0/24"]
	I1018 17:51:52.547695       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-181800-m05"
	I1018 17:52:39.400298       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-181800-m04"
	
	
	==> kube-controller-manager [bd6f9d7be603729a0a5200b910dc4c63002c84e58b83cb98debb890cf0bf202d] <==
	I1018 17:49:24.964069       1 serving.go:386] Generated self-signed cert in-memory
	I1018 17:49:25.434782       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1018 17:49:25.434808       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 17:49:25.436324       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1018 17:49:25.436542       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1018 17:49:25.436706       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1018 17:49:25.436723       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1018 17:49:45.439754       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8443/healthz\": dial tcp 192.168.49.2:8443: connect: connection refused"
	
	
	==> kube-proxy [8aea864f19933a28597488b60aa422e08bea2bfd07e84bd2fec57087062dc95f] <==
	I1018 17:50:15.663641       1 server_linux.go:53] "Using iptables proxy"
	I1018 17:50:16.334903       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 17:50:16.464013       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 17:50:16.464050       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1018 17:50:16.464138       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 17:50:16.493669       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 17:50:16.493728       1 server_linux.go:132] "Using iptables Proxier"
	I1018 17:50:16.497992       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 17:50:16.498301       1 server.go:527] "Version info" version="v1.34.1"
	I1018 17:50:16.498377       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 17:50:16.507101       1 config.go:200] "Starting service config controller"
	I1018 17:50:16.507206       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 17:50:16.507258       1 config.go:106] "Starting endpoint slice config controller"
	I1018 17:50:16.507322       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 17:50:16.507360       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 17:50:16.507388       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 17:50:16.510070       1 config.go:309] "Starting node config controller"
	I1018 17:50:16.510095       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 17:50:16.510103       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 17:50:16.607760       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 17:50:16.607802       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 17:50:16.607844       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [fb83e2f9880f48e77ccba9ff1a0240a5eacc8c5f0b7758c70e7c19289ba8795a] <==
	E1018 17:51:49.799031       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-dl6h7\": pod kube-proxy-dl6h7 is already assigned to node \"ha-181800-m05\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-dl6h7" node="ha-181800-m05"
	E1018 17:51:49.799084       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 67c84e61-5f4b-4055-badb-7e5e5a8d6d59(kube-system/kube-proxy-dl6h7) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kube-proxy-dl6h7"
	E1018 17:51:49.799106       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-dl6h7\": pod kube-proxy-dl6h7 is already assigned to node \"ha-181800-m05\"" logger="UnhandledError" pod="kube-system/kube-proxy-dl6h7"
	I1018 17:51:49.804755       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-dl6h7" node="ha-181800-m05"
	E1018 17:51:49.847992       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-652ws\": pod kindnet-652ws is already assigned to node \"ha-181800-m05\"" plugin="DefaultBinder" pod="kube-system/kindnet-652ws" node="ha-181800-m05"
	E1018 17:51:49.848050       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 46966a03-2286-4bd7-84d6-3b294dde0b19(kube-system/kindnet-652ws) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-652ws"
	E1018 17:51:49.848070       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-652ws\": pod kindnet-652ws is already assigned to node \"ha-181800-m05\"" logger="UnhandledError" pod="kube-system/kindnet-652ws"
	I1018 17:51:49.855992       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-652ws" node="ha-181800-m05"
	E1018 17:51:49.900256       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-5t7kh\": pod kube-proxy-5t7kh is already assigned to node \"ha-181800-m05\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-5t7kh" node="ha-181800-m05"
	E1018 17:51:49.900306       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 732edcc6-d4d3-4a0e-b760-33a4fa7eb2a5(kube-system/kube-proxy-5t7kh) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kube-proxy-5t7kh"
	E1018 17:51:49.900326       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-5t7kh\": pod kube-proxy-5t7kh is already assigned to node \"ha-181800-m05\"" logger="UnhandledError" pod="kube-system/kube-proxy-5t7kh"
	I1018 17:51:49.916504       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-5t7kh" node="ha-181800-m05"
	E1018 17:51:50.349815       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-j5wff\": pod kube-proxy-j5wff is already assigned to node \"ha-181800-m05\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-j5wff" node="ha-181800-m05"
	E1018 17:51:50.349866       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod b9575532-0422-4f41-8630-cc21dd86b88d(kube-system/kube-proxy-j5wff) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kube-proxy-j5wff"
	E1018 17:51:50.349885       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-j5wff\": pod kube-proxy-j5wff is already assigned to node \"ha-181800-m05\"" logger="UnhandledError" pod="kube-system/kube-proxy-j5wff"
	E1018 17:51:50.350097       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-n8km8\": pod kindnet-n8km8 is already assigned to node \"ha-181800-m05\"" plugin="DefaultBinder" pod="kube-system/kindnet-n8km8" node="ha-181800-m05"
	E1018 17:51:50.350120       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 3423ff09-8c27-4f2f-a971-f5e03ff5f1f3(kube-system/kindnet-n8km8) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-n8km8"
	I1018 17:51:50.354530       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-j5wff" node="ha-181800-m05"
	E1018 17:51:50.356637       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-n8km8\": pod kindnet-n8km8 is already assigned to node \"ha-181800-m05\"" logger="UnhandledError" pod="kube-system/kindnet-n8km8"
	I1018 17:51:50.356682       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-n8km8" node="ha-181800-m05"
	E1018 17:51:59.811705       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-wg66d\": pod kube-proxy-wg66d is already assigned to node \"ha-181800-m05\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-wg66d" node="ha-181800-m05"
	E1018 17:51:59.811780       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-wg66d\": pod kube-proxy-wg66d is already assigned to node \"ha-181800-m05\"" logger="UnhandledError" pod="kube-system/kube-proxy-wg66d"
	I1018 17:51:59.811805       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-wg66d" node="ha-181800-m05"
	E1018 17:51:59.953333       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-7xkff\": pod kube-proxy-7xkff is already assigned to node \"ha-181800-m05\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-7xkff" node="ha-181800-m05"
	E1018 17:51:59.953417       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-7xkff\": pod kube-proxy-7xkff is already assigned to node \"ha-181800-m05\"" logger="UnhandledError" pod="kube-system/kube-proxy-7xkff"
	
	
	==> kubelet <==
	Oct 18 17:50:12 ha-181800 kubelet[798]: I1018 17:50:12.842479     798 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ha-181800"
	Oct 18 17:50:12 ha-181800 kubelet[798]: E1018 17:50:12.856112     798 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ha-181800\" already exists" pod="kube-system/kube-controller-manager-ha-181800"
	Oct 18 17:50:12 ha-181800 kubelet[798]: I1018 17:50:12.856349     798 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ha-181800"
	Oct 18 17:50:12 ha-181800 kubelet[798]: E1018 17:50:12.867959     798 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ha-181800\" already exists" pod="kube-system/kube-scheduler-ha-181800"
	Oct 18 17:50:12 ha-181800 kubelet[798]: I1018 17:50:12.868003     798 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-vip-ha-181800"
	Oct 18 17:50:12 ha-181800 kubelet[798]: E1018 17:50:12.881408     798 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-vip-ha-181800\" already exists" pod="kube-system/kube-vip-ha-181800"
	Oct 18 17:50:12 ha-181800 kubelet[798]: I1018 17:50:12.881451     798 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-ha-181800"
	Oct 18 17:50:12 ha-181800 kubelet[798]: E1018 17:50:12.896352     798 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-ha-181800\" already exists" pod="kube-system/etcd-ha-181800"
	Oct 18 17:50:13 ha-181800 kubelet[798]: I1018 17:50:13.091654     798 apiserver.go:52] "Watching apiserver"
	Oct 18 17:50:13 ha-181800 kubelet[798]: I1018 17:50:13.099077     798 scope.go:117] "RemoveContainer" containerID="bd6f9d7be603729a0a5200b910dc4c63002c84e58b83cb98debb890cf0bf202d"
	Oct 18 17:50:13 ha-181800 kubelet[798]: I1018 17:50:13.216894     798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5edfc356-9d49-4895-b36a-06c2bd39155a-xtables-lock\") pod \"kindnet-72mvm\" (UID: \"5edfc356-9d49-4895-b36a-06c2bd39155a\") " pod="kube-system/kindnet-72mvm"
	Oct 18 17:50:13 ha-181800 kubelet[798]: I1018 17:50:13.217054     798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/15b89226-91ae-478f-acfe-7841776b1377-xtables-lock\") pod \"kube-proxy-stgvm\" (UID: \"15b89226-91ae-478f-acfe-7841776b1377\") " pod="kube-system/kube-proxy-stgvm"
	Oct 18 17:50:13 ha-181800 kubelet[798]: I1018 17:50:13.217077     798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/15b89226-91ae-478f-acfe-7841776b1377-lib-modules\") pod \"kube-proxy-stgvm\" (UID: \"15b89226-91ae-478f-acfe-7841776b1377\") " pod="kube-system/kube-proxy-stgvm"
	Oct 18 17:50:13 ha-181800 kubelet[798]: I1018 17:50:13.217093     798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/3c6521cd-8e1b-46aa-96a3-39e475e1426c-tmp\") pod \"storage-provisioner\" (UID: \"3c6521cd-8e1b-46aa-96a3-39e475e1426c\") " pod="kube-system/storage-provisioner"
	Oct 18 17:50:13 ha-181800 kubelet[798]: I1018 17:50:13.217110     798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/5edfc356-9d49-4895-b36a-06c2bd39155a-cni-cfg\") pod \"kindnet-72mvm\" (UID: \"5edfc356-9d49-4895-b36a-06c2bd39155a\") " pod="kube-system/kindnet-72mvm"
	Oct 18 17:50:13 ha-181800 kubelet[798]: I1018 17:50:13.217127     798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5edfc356-9d49-4895-b36a-06c2bd39155a-lib-modules\") pod \"kindnet-72mvm\" (UID: \"5edfc356-9d49-4895-b36a-06c2bd39155a\") " pod="kube-system/kindnet-72mvm"
	Oct 18 17:50:13 ha-181800 kubelet[798]: I1018 17:50:13.222063     798 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 18 17:50:13 ha-181800 kubelet[798]: I1018 17:50:13.266801     798 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 18 17:50:13 ha-181800 kubelet[798]: W1018 17:50:13.559633     798 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/5743bf3218eb5aef405eed98d37c004771c8345cbd69290b8943dc0c7cbde7c2/crio-c1b08873679284c397e63dc0b5e86a2778290edfaa47a2d3af86e787870c2624 WatchSource:0}: Error finding container c1b08873679284c397e63dc0b5e86a2778290edfaa47a2d3af86e787870c2624: Status 404 returned error can't find the container with id c1b08873679284c397e63dc0b5e86a2778290edfaa47a2d3af86e787870c2624
	Oct 18 17:50:13 ha-181800 kubelet[798]: W1018 17:50:13.569533     798 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/5743bf3218eb5aef405eed98d37c004771c8345cbd69290b8943dc0c7cbde7c2/crio-0e97ce88bd2d3a36101a0a9930710ba30f34091e61ed0ed0249bd68b5d0f6fa7 WatchSource:0}: Error finding container 0e97ce88bd2d3a36101a0a9930710ba30f34091e61ed0ed0249bd68b5d0f6fa7: Status 404 returned error can't find the container with id 0e97ce88bd2d3a36101a0a9930710ba30f34091e61ed0ed0249bd68b5d0f6fa7
	Oct 18 17:50:13 ha-181800 kubelet[798]: W1018 17:50:13.789592     798 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/5743bf3218eb5aef405eed98d37c004771c8345cbd69290b8943dc0c7cbde7c2/crio-2d6e6e05d930c610e9ac4942479166d3061f0b37055dbc9645478f2923f1ff53 WatchSource:0}: Error finding container 2d6e6e05d930c610e9ac4942479166d3061f0b37055dbc9645478f2923f1ff53: Status 404 returned error can't find the container with id 2d6e6e05d930c610e9ac4942479166d3061f0b37055dbc9645478f2923f1ff53
	Oct 18 17:50:17 ha-181800 kubelet[798]: E1018 17:50:17.091585     798 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/351deab77f22682d337e98537451625e6f5def60ef97378fe2ea489cd9cb173d/diff" to get inode usage: stat /var/lib/containers/storage/overlay/351deab77f22682d337e98537451625e6f5def60ef97378fe2ea489cd9cb173d/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_kube-controller-manager-ha-181800_9656c3d6ff12279b641632c7e3275a8a/kube-controller-manager/6.log" to get inode usage: stat /var/log/pods/kube-system_kube-controller-manager-ha-181800_9656c3d6ff12279b641632c7e3275a8a/kube-controller-manager/6.log: no such file or directory
	Oct 18 17:50:17 ha-181800 kubelet[798]: E1018 17:50:17.097904     798 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/3a8ceae8950ea9bca2bf6a05f4cb7633f55f4458c755f32741110642edbfd7ba/diff" to get inode usage: stat /var/lib/containers/storage/overlay/3a8ceae8950ea9bca2bf6a05f4cb7633f55f4458c755f32741110642edbfd7ba/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_kube-apiserver-ha-181800_f173b0166ea7317b529b58e20ef8d65f/kube-apiserver/6.log" to get inode usage: stat /var/log/pods/kube-system_kube-apiserver-ha-181800_f173b0166ea7317b529b58e20ef8d65f/kube-apiserver/6.log: no such file or directory
	Oct 18 17:50:17 ha-181800 kubelet[798]: E1018 17:50:17.148404     798 cadvisor_stats_provider.go:567] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/crio/crio-dad8e190116effc9294125133d608015a4f2ec86c95f308f26d5e4d771de4985\": RecentStats: unable to find data in memory cache]"
	Oct 18 17:50:45 ha-181800 kubelet[798]: I1018 17:50:45.570659     798 scope.go:117] "RemoveContainer" containerID="f2f15c809753a0cd811b332e6f6a8f9b5be888da593a2286ff085903e5ec3a12"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-181800 -n ha-181800
helpers_test.go:269: (dbg) Run:  kubectl --context ha-181800 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (4.68s)
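Note: the kube-scheduler entries earlier in this post-mortem ("Operation cannot be fulfilled on pods/binding ... already assigned to node \"ha-181800-m05\"") are binding conflicts that the scheduler resolves itself, as its own follow-up line "Pod has been assigned to node. Abort adding it back to queue." shows, so they are not necessarily the cause of this failure. A minimal follow-up check to confirm the kube-proxy and kindnet pods actually landed on the new node (a hypothetical diagnostic, assuming the ha-181800 context is still reachable):

	kubectl --context ha-181800 get pods -n kube-system -o wide --field-selector spec.nodeName=ha-181800-m05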

                                                
                                    
x
+
TestJSONOutput/pause/Command (1.77s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-310292 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-310292 --output=json --user=testUser: exit status 80 (1.768577564s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"384daebb-1bf3-4e84-a3cb-60f2d5ceac97","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-310292 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"2a4540d1-b46a-4e9b-b1ae-d0107887aafe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-18T17:54:31Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"0b77986c-21aa-4336-900d-b7282a12d09d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-310292 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (1.77s)
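Note: the GUEST_PAUSE error above comes from minikube shelling out to `sudo runc list -f json` inside the node and runc reporting `open /run/runc: no such file or directory`. A minimal manual reproduction of that step (hypothetical, assuming the json-output-310292 profile is still running), using the same ssh form the test harness uses elsewhere in this report:

	out/minikube-linux-arm64 ssh -p json-output-310292 sudo ls -ld /run/runc
	out/minikube-linux-arm64 ssh -p json-output-310292 sudo runc list -f json

The unpause failure below exits with the same underlying runc error.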

                                                
                                    
x
+
TestJSONOutput/unpause/Command (2.21s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-310292 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-310292 --output=json --user=testUser: exit status 80 (2.209573443s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"0a523b6a-764f-4eb2-a778-6f46004785e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-310292 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"437f203c-f187-4a5b-b7c6-d12fe19d47ff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-18T17:54:33Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"73ac912a-2d6b-4f1b-a77d-41adc7646c69","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-310292 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (2.21s)

                                                
                                    
x
+
TestPause/serial/Pause (7.79s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-321903 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-321903 --alsologtostderr -v=5: exit status 80 (2.158264975s)

                                                
                                                
-- stdout --
	* Pausing node pause-321903 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 18:17:11.114109  182978 out.go:360] Setting OutFile to fd 1 ...
	I1018 18:17:11.114996  182978 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 18:17:11.115032  182978 out.go:374] Setting ErrFile to fd 2...
	I1018 18:17:11.115153  182978 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 18:17:11.115462  182978 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-2509/.minikube/bin
	I1018 18:17:11.115777  182978 out.go:368] Setting JSON to false
	I1018 18:17:11.115828  182978 mustload.go:65] Loading cluster: pause-321903
	I1018 18:17:11.116289  182978 config.go:182] Loaded profile config "pause-321903": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 18:17:11.116835  182978 cli_runner.go:164] Run: docker container inspect pause-321903 --format={{.State.Status}}
	I1018 18:17:11.149174  182978 host.go:66] Checking if "pause-321903" exists ...
	I1018 18:17:11.149515  182978 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 18:17:11.250060  182978 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-18 18:17:11.240715108 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 18:17:11.250721  182978 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-321903 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) want
virtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1018 18:17:11.254405  182978 out.go:179] * Pausing node pause-321903 ... 
	I1018 18:17:11.257375  182978 host.go:66] Checking if "pause-321903" exists ...
	I1018 18:17:11.257688  182978 ssh_runner.go:195] Run: systemctl --version
	I1018 18:17:11.257740  182978 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-321903
	I1018 18:17:11.276704  182978 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33018 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/pause-321903/id_rsa Username:docker}
	I1018 18:17:11.383339  182978 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 18:17:11.397000  182978 pause.go:52] kubelet running: true
	I1018 18:17:11.397073  182978 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 18:17:11.672216  182978 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 18:17:11.672305  182978 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 18:17:11.758315  182978 cri.go:89] found id: "1230f9e5cf3e9675dbf837caca1f6efe823b10055878ee033b6d97fdec3c4a73"
	I1018 18:17:11.758336  182978 cri.go:89] found id: "509dbb4e9ee8a1d111116d82883b8bcaf502fb1be8d2a0478c3a9c6a300aa9c9"
	I1018 18:17:11.758340  182978 cri.go:89] found id: "b1f5de574a87e23c6882f0de27620ef3b5dcc7d2dd2cca428ed032ed283b7f17"
	I1018 18:17:11.758344  182978 cri.go:89] found id: "0a87401b26f8fe5eca4265d3f61980cf50be35c9ad6297578a3c1117545e88e9"
	I1018 18:17:11.758348  182978 cri.go:89] found id: "4afe595d4901a5c13fe23c2f892b2c0e58181ac48b98ee859ee099da4d4a1607"
	I1018 18:17:11.758351  182978 cri.go:89] found id: "e27366c82d5cb638c304a969a81298d5df85aada0411f1d79cdf701c215ca024"
	I1018 18:17:11.758355  182978 cri.go:89] found id: "657241d3f85a70a79a49eceb02219d308003745e67fb44fd088e3e9c4b8e4772"
	I1018 18:17:11.758358  182978 cri.go:89] found id: "b0ab4ec6d0c28b9d0f51329318dcec692b7c5e3207c1daea9fe798392dcf0b44"
	I1018 18:17:11.758361  182978 cri.go:89] found id: "d8d01459be672ec4fd8b85084bd40212b1672ef1c3a3885c3617da35e9c4fb8b"
	I1018 18:17:11.758367  182978 cri.go:89] found id: "fb3fca7cd1009ed922f3f234148284e89a8be61f5965386518f93c0ab5ecbb2d"
	I1018 18:17:11.758370  182978 cri.go:89] found id: "2438a9ff996c0d245fc840f1c24567b79350ac528f1466e447512ce99687f671"
	I1018 18:17:11.758373  182978 cri.go:89] found id: "fe01b2bdff4a1ff4347ebb1876a7fe94ea12cf41798722170b21b95c0ea7477c"
	I1018 18:17:11.758376  182978 cri.go:89] found id: "9b891540c5deca63a1228b886f79648205e27476e41e85a70e1a582b416d1b3f"
	I1018 18:17:11.758379  182978 cri.go:89] found id: "e1cc985af8447c811346ce79c30a2b8fa589396bae84a02969101095e0145ae8"
	I1018 18:17:11.758382  182978 cri.go:89] found id: ""
	I1018 18:17:11.758428  182978 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 18:17:11.770676  182978 retry.go:31] will retry after 184.089939ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T18:17:11Z" level=error msg="open /run/runc: no such file or directory"
	I1018 18:17:11.955065  182978 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 18:17:11.968544  182978 pause.go:52] kubelet running: false
	I1018 18:17:11.968607  182978 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 18:17:12.170823  182978 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 18:17:12.170909  182978 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 18:17:12.251236  182978 cri.go:89] found id: "1230f9e5cf3e9675dbf837caca1f6efe823b10055878ee033b6d97fdec3c4a73"
	I1018 18:17:12.251260  182978 cri.go:89] found id: "509dbb4e9ee8a1d111116d82883b8bcaf502fb1be8d2a0478c3a9c6a300aa9c9"
	I1018 18:17:12.251265  182978 cri.go:89] found id: "b1f5de574a87e23c6882f0de27620ef3b5dcc7d2dd2cca428ed032ed283b7f17"
	I1018 18:17:12.251269  182978 cri.go:89] found id: "0a87401b26f8fe5eca4265d3f61980cf50be35c9ad6297578a3c1117545e88e9"
	I1018 18:17:12.251273  182978 cri.go:89] found id: "4afe595d4901a5c13fe23c2f892b2c0e58181ac48b98ee859ee099da4d4a1607"
	I1018 18:17:12.251277  182978 cri.go:89] found id: "e27366c82d5cb638c304a969a81298d5df85aada0411f1d79cdf701c215ca024"
	I1018 18:17:12.251280  182978 cri.go:89] found id: "657241d3f85a70a79a49eceb02219d308003745e67fb44fd088e3e9c4b8e4772"
	I1018 18:17:12.251283  182978 cri.go:89] found id: "b0ab4ec6d0c28b9d0f51329318dcec692b7c5e3207c1daea9fe798392dcf0b44"
	I1018 18:17:12.251286  182978 cri.go:89] found id: "d8d01459be672ec4fd8b85084bd40212b1672ef1c3a3885c3617da35e9c4fb8b"
	I1018 18:17:12.251293  182978 cri.go:89] found id: "fb3fca7cd1009ed922f3f234148284e89a8be61f5965386518f93c0ab5ecbb2d"
	I1018 18:17:12.251297  182978 cri.go:89] found id: "2438a9ff996c0d245fc840f1c24567b79350ac528f1466e447512ce99687f671"
	I1018 18:17:12.251300  182978 cri.go:89] found id: "fe01b2bdff4a1ff4347ebb1876a7fe94ea12cf41798722170b21b95c0ea7477c"
	I1018 18:17:12.251304  182978 cri.go:89] found id: "9b891540c5deca63a1228b886f79648205e27476e41e85a70e1a582b416d1b3f"
	I1018 18:17:12.251307  182978 cri.go:89] found id: "e1cc985af8447c811346ce79c30a2b8fa589396bae84a02969101095e0145ae8"
	I1018 18:17:12.251311  182978 cri.go:89] found id: ""
	I1018 18:17:12.251360  182978 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 18:17:12.265484  182978 retry.go:31] will retry after 524.806224ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T18:17:12Z" level=error msg="open /run/runc: no such file or directory"
	I1018 18:17:12.791228  182978 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 18:17:12.806170  182978 pause.go:52] kubelet running: false
	I1018 18:17:12.806239  182978 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 18:17:13.040630  182978 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 18:17:13.040717  182978 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 18:17:13.155041  182978 cri.go:89] found id: "1230f9e5cf3e9675dbf837caca1f6efe823b10055878ee033b6d97fdec3c4a73"
	I1018 18:17:13.155060  182978 cri.go:89] found id: "509dbb4e9ee8a1d111116d82883b8bcaf502fb1be8d2a0478c3a9c6a300aa9c9"
	I1018 18:17:13.155065  182978 cri.go:89] found id: "b1f5de574a87e23c6882f0de27620ef3b5dcc7d2dd2cca428ed032ed283b7f17"
	I1018 18:17:13.155069  182978 cri.go:89] found id: "0a87401b26f8fe5eca4265d3f61980cf50be35c9ad6297578a3c1117545e88e9"
	I1018 18:17:13.155073  182978 cri.go:89] found id: "4afe595d4901a5c13fe23c2f892b2c0e58181ac48b98ee859ee099da4d4a1607"
	I1018 18:17:13.155087  182978 cri.go:89] found id: "e27366c82d5cb638c304a969a81298d5df85aada0411f1d79cdf701c215ca024"
	I1018 18:17:13.155090  182978 cri.go:89] found id: "657241d3f85a70a79a49eceb02219d308003745e67fb44fd088e3e9c4b8e4772"
	I1018 18:17:13.155093  182978 cri.go:89] found id: "b0ab4ec6d0c28b9d0f51329318dcec692b7c5e3207c1daea9fe798392dcf0b44"
	I1018 18:17:13.155097  182978 cri.go:89] found id: "d8d01459be672ec4fd8b85084bd40212b1672ef1c3a3885c3617da35e9c4fb8b"
	I1018 18:17:13.155103  182978 cri.go:89] found id: "fb3fca7cd1009ed922f3f234148284e89a8be61f5965386518f93c0ab5ecbb2d"
	I1018 18:17:13.155106  182978 cri.go:89] found id: "2438a9ff996c0d245fc840f1c24567b79350ac528f1466e447512ce99687f671"
	I1018 18:17:13.155109  182978 cri.go:89] found id: "fe01b2bdff4a1ff4347ebb1876a7fe94ea12cf41798722170b21b95c0ea7477c"
	I1018 18:17:13.155112  182978 cri.go:89] found id: "9b891540c5deca63a1228b886f79648205e27476e41e85a70e1a582b416d1b3f"
	I1018 18:17:13.155124  182978 cri.go:89] found id: "e1cc985af8447c811346ce79c30a2b8fa589396bae84a02969101095e0145ae8"
	I1018 18:17:13.155128  182978 cri.go:89] found id: ""
	I1018 18:17:13.155189  182978 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 18:17:13.169651  182978 out.go:203] 
	W1018 18:17:13.172448  182978 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T18:17:13Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T18:17:13Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 18:17:13.172471  182978 out.go:285] * 
	* 
	W1018 18:17:13.180966  182978 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 18:17:13.183640  182978 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-321903 --alsologtostderr -v=5" : exit status 80
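Note: the --alsologtostderr trace above shows the pause flow: minikube disables the kubelet, lists CRI containers in the kube-system, kubernetes-dashboard, and istio-operator namespaces via crictl, then runs `sudo runc list -f json`, and that last step fails with `open /run/runc: no such file or directory`, which surfaces as GUEST_PAUSE. The same two commands can be replayed manually (a hedged sketch, assuming pause-321903 is still up):

	out/minikube-linux-arm64 ssh -p pause-321903 sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	out/minikube-linux-arm64 ssh -p pause-321903 sudo runc list -f json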
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-321903
helpers_test.go:243: (dbg) docker inspect pause-321903:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b28ca1d5f7a6e3974217e890bb848cfb7a7dcd4a61a8b67db2a23bfdc31c260e",
	        "Created": "2025-10-18T18:15:13.869574866Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 173319,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T18:15:13.931176394Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/b28ca1d5f7a6e3974217e890bb848cfb7a7dcd4a61a8b67db2a23bfdc31c260e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b28ca1d5f7a6e3974217e890bb848cfb7a7dcd4a61a8b67db2a23bfdc31c260e/hostname",
	        "HostsPath": "/var/lib/docker/containers/b28ca1d5f7a6e3974217e890bb848cfb7a7dcd4a61a8b67db2a23bfdc31c260e/hosts",
	        "LogPath": "/var/lib/docker/containers/b28ca1d5f7a6e3974217e890bb848cfb7a7dcd4a61a8b67db2a23bfdc31c260e/b28ca1d5f7a6e3974217e890bb848cfb7a7dcd4a61a8b67db2a23bfdc31c260e-json.log",
	        "Name": "/pause-321903",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-321903:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-321903",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b28ca1d5f7a6e3974217e890bb848cfb7a7dcd4a61a8b67db2a23bfdc31c260e",
	                "LowerDir": "/var/lib/docker/overlay2/f6ede9998230dc5f3d47fa3e062dd23465a972ca8e4778d15ec5d10aa3b1adc3-init/diff:/var/lib/docker/overlay2/584ab177b02ad2db5330471b7171ad39934c457d8615b9ee4939a04b59f78474/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f6ede9998230dc5f3d47fa3e062dd23465a972ca8e4778d15ec5d10aa3b1adc3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f6ede9998230dc5f3d47fa3e062dd23465a972ca8e4778d15ec5d10aa3b1adc3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f6ede9998230dc5f3d47fa3e062dd23465a972ca8e4778d15ec5d10aa3b1adc3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-321903",
	                "Source": "/var/lib/docker/volumes/pause-321903/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-321903",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-321903",
	                "name.minikube.sigs.k8s.io": "pause-321903",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "45b88b0a13e61b90e10101caeb283a4693bd9723436a3af9b522953035c38a5d",
	            "SandboxKey": "/var/run/docker/netns/45b88b0a13e6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33018"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33019"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33022"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33020"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33021"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-321903": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fa:84:36:61:a3:92",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "784ab4e1778c53d0a27ab91d3e6988f16075fb6e81844b257b18551c5e24185c",
	                    "EndpointID": "969352ab4c0d6bc7fc1246e3d15da385c0034e9327783d662ae5173c9f1284b6",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-321903",
	                        "b28ca1d5f7a6"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-321903 -n pause-321903
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-321903 -n pause-321903: exit status 2 (439.231103ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-321903 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-321903 logs -n 25: (1.738019075s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                    ARGS                                                    │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-111074 sudo systemctl cat kubelet --no-pager                                                     │ cilium-111074            │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │                     │
	│ ssh     │ -p cilium-111074 sudo journalctl -xeu kubelet --all --full --no-pager                                      │ cilium-111074            │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │                     │
	│ ssh     │ -p cilium-111074 sudo cat /etc/kubernetes/kubelet.conf                                                     │ cilium-111074            │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │                     │
	│ ssh     │ -p cilium-111074 sudo cat /var/lib/kubelet/config.yaml                                                     │ cilium-111074            │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │                     │
	│ ssh     │ -p cilium-111074 sudo systemctl status docker --all --full --no-pager                                      │ cilium-111074            │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │                     │
	│ ssh     │ -p cilium-111074 sudo systemctl cat docker --no-pager                                                      │ cilium-111074            │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │                     │
	│ ssh     │ -p cilium-111074 sudo cat /etc/docker/daemon.json                                                          │ cilium-111074            │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │                     │
	│ ssh     │ -p cilium-111074 sudo docker system info                                                                   │ cilium-111074            │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │                     │
	│ ssh     │ -p cilium-111074 sudo systemctl status cri-docker --all --full --no-pager                                  │ cilium-111074            │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │                     │
	│ ssh     │ -p cilium-111074 sudo systemctl cat cri-docker --no-pager                                                  │ cilium-111074            │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │                     │
	│ ssh     │ -p cilium-111074 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                             │ cilium-111074            │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │                     │
	│ ssh     │ -p cilium-111074 sudo cat /usr/lib/systemd/system/cri-docker.service                                       │ cilium-111074            │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │                     │
	│ ssh     │ -p cilium-111074 sudo cri-dockerd --version                                                                │ cilium-111074            │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │                     │
	│ ssh     │ -p cilium-111074 sudo systemctl status containerd --all --full --no-pager                                  │ cilium-111074            │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │                     │
	│ ssh     │ -p cilium-111074 sudo systemctl cat containerd --no-pager                                                  │ cilium-111074            │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │                     │
	│ ssh     │ -p cilium-111074 sudo cat /lib/systemd/system/containerd.service                                           │ cilium-111074            │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │                     │
	│ ssh     │ -p cilium-111074 sudo cat /etc/containerd/config.toml                                                      │ cilium-111074            │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │                     │
	│ ssh     │ -p cilium-111074 sudo containerd config dump                                                               │ cilium-111074            │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │                     │
	│ ssh     │ -p cilium-111074 sudo systemctl status crio --all --full --no-pager                                        │ cilium-111074            │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │                     │
	│ ssh     │ -p cilium-111074 sudo systemctl cat crio --no-pager                                                        │ cilium-111074            │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │                     │
	│ ssh     │ -p cilium-111074 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                              │ cilium-111074            │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │                     │
	│ ssh     │ -p cilium-111074 sudo crio config                                                                          │ cilium-111074            │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │                     │
	│ delete  │ -p cilium-111074                                                                                           │ cilium-111074            │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │ 18 Oct 25 18:16 UTC │
	│ start   │ -p force-systemd-env-785999 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-env-785999 │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │                     │
	│ pause   │ -p pause-321903 --alsologtostderr -v=5                                                                     │ pause-321903             │ jenkins │ v1.37.0 │ 18 Oct 25 18:17 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 18:16:57
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 18:16:57.094578  181365 out.go:360] Setting OutFile to fd 1 ...
	I1018 18:16:57.094789  181365 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 18:16:57.094816  181365 out.go:374] Setting ErrFile to fd 2...
	I1018 18:16:57.094838  181365 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 18:16:57.095131  181365 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-2509/.minikube/bin
	I1018 18:16:57.095587  181365 out.go:368] Setting JSON to false
	I1018 18:16:57.096522  181365 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7166,"bootTime":1760804251,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1018 18:16:57.096611  181365 start.go:141] virtualization:  
	I1018 18:16:57.100274  181365 out.go:179] * [force-systemd-env-785999] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 18:16:57.103315  181365 out.go:179]   - MINIKUBE_LOCATION=21409
	I1018 18:16:57.103379  181365 notify.go:220] Checking for updates...
	I1018 18:16:57.109356  181365 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 18:16:57.112443  181365 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-2509/kubeconfig
	I1018 18:16:57.115845  181365 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-2509/.minikube
	I1018 18:16:57.118661  181365 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 18:16:57.121542  181365 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=true
	I1018 18:16:57.125010  181365 config.go:182] Loaded profile config "pause-321903": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 18:16:57.125107  181365 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 18:16:57.170806  181365 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 18:16:57.170953  181365 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 18:16:57.321257  181365 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-18 18:16:57.301212974 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 18:16:57.321364  181365 docker.go:318] overlay module found
	I1018 18:16:57.324614  181365 out.go:179] * Using the docker driver based on user configuration
	I1018 18:16:57.327433  181365 start.go:305] selected driver: docker
	I1018 18:16:57.327456  181365 start.go:925] validating driver "docker" against <nil>
	I1018 18:16:57.327471  181365 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 18:16:57.328203  181365 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 18:16:57.427258  181365 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-18 18:16:57.416253319 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 18:16:57.427406  181365 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 18:16:57.427617  181365 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1018 18:16:57.430632  181365 out.go:179] * Using Docker driver with root privileges
	I1018 18:16:57.433389  181365 cni.go:84] Creating CNI manager for ""
	I1018 18:16:57.433452  181365 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 18:16:57.433461  181365 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 18:16:57.433544  181365 start.go:349] cluster config:
	{Name:force-systemd-env-785999 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-785999 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 18:16:57.436446  181365 out.go:179] * Starting "force-systemd-env-785999" primary control-plane node in "force-systemd-env-785999" cluster
	I1018 18:16:57.439207  181365 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 18:16:57.442103  181365 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 18:16:57.444900  181365 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 18:16:57.444969  181365 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1018 18:16:57.444980  181365 cache.go:58] Caching tarball of preloaded images
	I1018 18:16:57.445016  181365 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 18:16:57.445075  181365 preload.go:233] Found /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 18:16:57.445084  181365 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 18:16:57.445190  181365 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/force-systemd-env-785999/config.json ...
	I1018 18:16:57.445207  181365 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/force-systemd-env-785999/config.json: {Name:mk41c0342913787b6b41bfce0198abad5f0c466c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:16:57.475310  181365 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 18:16:57.475331  181365 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 18:16:57.475348  181365 cache.go:232] Successfully downloaded all kic artifacts
	I1018 18:16:57.475372  181365 start.go:360] acquireMachinesLock for force-systemd-env-785999: {Name:mk25aac2754eb303f882f8cecf9a7d47e61b7a50 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 18:16:57.475467  181365 start.go:364] duration metric: took 79.033µs to acquireMachinesLock for "force-systemd-env-785999"
	I1018 18:16:57.475490  181365 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-785999 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-785999 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 18:16:57.475549  181365 start.go:125] createHost starting for "" (driver="docker")
	I1018 18:16:54.173916  179367 addons.go:514] duration metric: took 6.985951ms for enable addons: enabled=[]
	I1018 18:16:54.174048  179367 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 18:16:54.512279  179367 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 18:16:54.537863  179367 node_ready.go:35] waiting up to 6m0s for node "pause-321903" to be "Ready" ...
	I1018 18:16:57.478908  181365 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1018 18:16:57.479134  181365 start.go:159] libmachine.API.Create for "force-systemd-env-785999" (driver="docker")
	I1018 18:16:57.479168  181365 client.go:168] LocalClient.Create starting
	I1018 18:16:57.479260  181365 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem
	I1018 18:16:57.479307  181365 main.go:141] libmachine: Decoding PEM data...
	I1018 18:16:57.479325  181365 main.go:141] libmachine: Parsing certificate...
	I1018 18:16:57.479379  181365 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem
	I1018 18:16:57.479408  181365 main.go:141] libmachine: Decoding PEM data...
	I1018 18:16:57.479422  181365 main.go:141] libmachine: Parsing certificate...
	I1018 18:16:57.479778  181365 cli_runner.go:164] Run: docker network inspect force-systemd-env-785999 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 18:16:57.502394  181365 cli_runner.go:211] docker network inspect force-systemd-env-785999 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 18:16:57.502474  181365 network_create.go:284] running [docker network inspect force-systemd-env-785999] to gather additional debugging logs...
	I1018 18:16:57.502490  181365 cli_runner.go:164] Run: docker network inspect force-systemd-env-785999
	W1018 18:16:57.526496  181365 cli_runner.go:211] docker network inspect force-systemd-env-785999 returned with exit code 1
	I1018 18:16:57.526526  181365 network_create.go:287] error running [docker network inspect force-systemd-env-785999]: docker network inspect force-systemd-env-785999: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-785999 not found
	I1018 18:16:57.526540  181365 network_create.go:289] output of [docker network inspect force-systemd-env-785999]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-785999 not found
	
	** /stderr **
	I1018 18:16:57.526633  181365 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 18:16:57.566933  181365 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-903568cdf824 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:7a:80:c0:8c:ed} reservation:<nil>}
	I1018 18:16:57.567239  181365 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-ee9fcaab9ca8 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a6:a7:65:1b:c0:41} reservation:<nil>}
	I1018 18:16:57.567514  181365 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-414fc11e154b IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:86:f0:a8:1a:86:00} reservation:<nil>}
	I1018 18:16:57.567911  181365 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019b7250}
	I1018 18:16:57.567936  181365 network_create.go:124] attempt to create docker network force-systemd-env-785999 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1018 18:16:57.567993  181365 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-785999 force-systemd-env-785999
	I1018 18:16:57.684061  181365 network_create.go:108] docker network force-systemd-env-785999 192.168.76.0/24 created
	I1018 18:16:57.684096  181365 kic.go:121] calculated static IP "192.168.76.2" for the "force-systemd-env-785999" container
	I1018 18:16:57.684166  181365 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 18:16:57.705398  181365 cli_runner.go:164] Run: docker volume create force-systemd-env-785999 --label name.minikube.sigs.k8s.io=force-systemd-env-785999 --label created_by.minikube.sigs.k8s.io=true
	I1018 18:16:57.730452  181365 oci.go:103] Successfully created a docker volume force-systemd-env-785999
	I1018 18:16:57.730543  181365 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-785999-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-785999 --entrypoint /usr/bin/test -v force-systemd-env-785999:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 18:16:58.444082  181365 oci.go:107] Successfully prepared a docker volume force-systemd-env-785999
	I1018 18:16:58.444128  181365 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 18:16:58.444148  181365 kic.go:194] Starting extracting preloaded images to volume ...
	I1018 18:16:58.444228  181365 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-785999:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1018 18:17:00.486170  179367 node_ready.go:49] node "pause-321903" is "Ready"
	I1018 18:17:00.486202  179367 node_ready.go:38] duration metric: took 5.948302498s for node "pause-321903" to be "Ready" ...
	I1018 18:17:00.486216  179367 api_server.go:52] waiting for apiserver process to appear ...
	I1018 18:17:00.486310  179367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 18:17:00.517989  179367 api_server.go:72] duration metric: took 6.351356474s to wait for apiserver process to appear ...
	I1018 18:17:00.518014  179367 api_server.go:88] waiting for apiserver healthz status ...
	I1018 18:17:00.518035  179367 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 18:17:00.590556  179367 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 18:17:00.590642  179367 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 18:17:01.018187  179367 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 18:17:01.028761  179367 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 18:17:01.028928  179367 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 18:17:01.518137  179367 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 18:17:01.528288  179367 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 18:17:01.528370  179367 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 18:17:02.019035  179367 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 18:17:02.033030  179367 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1018 18:17:02.034847  179367 api_server.go:141] control plane version: v1.34.1
	I1018 18:17:02.034875  179367 api_server.go:131] duration metric: took 1.516854772s to wait for apiserver health ...
	I1018 18:17:02.034885  179367 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 18:17:02.041579  179367 system_pods.go:59] 7 kube-system pods found
	I1018 18:17:02.041683  179367 system_pods.go:61] "coredns-66bc5c9577-bxt8s" [0fbf5bcd-ba89-4603-85bf-895e985ed0cb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 18:17:02.041739  179367 system_pods.go:61] "etcd-pause-321903" [15471f42-4862-4c5f-9c9d-855a26a92fbc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 18:17:02.041766  179367 system_pods.go:61] "kindnet-h5sxp" [5909bf09-6c59-4f6e-859c-ac6a5c0792f9] Running
	I1018 18:17:02.041793  179367 system_pods.go:61] "kube-apiserver-pause-321903" [5c4e37f9-1f24-4284-a173-200ccea50d12] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 18:17:02.041833  179367 system_pods.go:61] "kube-controller-manager-pause-321903" [ad95ed65-fc9d-4ea5-aa25-e2d7eba789ac] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 18:17:02.041857  179367 system_pods.go:61] "kube-proxy-6ntpd" [34df88c7-c080-4704-a7cd-012f263ce7b9] Running
	I1018 18:17:02.041877  179367 system_pods.go:61] "kube-scheduler-pause-321903" [3d6c27c5-e0d8-462b-a0e5-e0c5a4c4000f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 18:17:02.041907  179367 system_pods.go:74] duration metric: took 7.014964ms to wait for pod list to return data ...
	I1018 18:17:02.041934  179367 default_sa.go:34] waiting for default service account to be created ...
	I1018 18:17:02.046140  179367 default_sa.go:45] found service account: "default"
	I1018 18:17:02.046165  179367 default_sa.go:55] duration metric: took 4.209924ms for default service account to be created ...
	I1018 18:17:02.046176  179367 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 18:17:02.051033  179367 system_pods.go:86] 7 kube-system pods found
	I1018 18:17:02.051121  179367 system_pods.go:89] "coredns-66bc5c9577-bxt8s" [0fbf5bcd-ba89-4603-85bf-895e985ed0cb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 18:17:02.051144  179367 system_pods.go:89] "etcd-pause-321903" [15471f42-4862-4c5f-9c9d-855a26a92fbc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 18:17:02.051166  179367 system_pods.go:89] "kindnet-h5sxp" [5909bf09-6c59-4f6e-859c-ac6a5c0792f9] Running
	I1018 18:17:02.051213  179367 system_pods.go:89] "kube-apiserver-pause-321903" [5c4e37f9-1f24-4284-a173-200ccea50d12] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 18:17:02.051236  179367 system_pods.go:89] "kube-controller-manager-pause-321903" [ad95ed65-fc9d-4ea5-aa25-e2d7eba789ac] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 18:17:02.051272  179367 system_pods.go:89] "kube-proxy-6ntpd" [34df88c7-c080-4704-a7cd-012f263ce7b9] Running
	I1018 18:17:02.051298  179367 system_pods.go:89] "kube-scheduler-pause-321903" [3d6c27c5-e0d8-462b-a0e5-e0c5a4c4000f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 18:17:02.051321  179367 system_pods.go:126] duration metric: took 5.138371ms to wait for k8s-apps to be running ...
	I1018 18:17:02.051359  179367 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 18:17:02.051478  179367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 18:17:02.067940  179367 system_svc.go:56] duration metric: took 16.574068ms WaitForService to wait for kubelet
	I1018 18:17:02.068030  179367 kubeadm.go:586] duration metric: took 7.901401064s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 18:17:02.068081  179367 node_conditions.go:102] verifying NodePressure condition ...
	I1018 18:17:02.072118  179367 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 18:17:02.072158  179367 node_conditions.go:123] node cpu capacity is 2
	I1018 18:17:02.072173  179367 node_conditions.go:105] duration metric: took 4.048535ms to run NodePressure ...
	I1018 18:17:02.072188  179367 start.go:241] waiting for startup goroutines ...
	I1018 18:17:02.072196  179367 start.go:246] waiting for cluster config update ...
	I1018 18:17:02.072208  179367 start.go:255] writing updated cluster config ...
	I1018 18:17:02.074668  179367 ssh_runner.go:195] Run: rm -f paused
	I1018 18:17:02.079501  179367 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 18:17:02.080091  179367 kapi.go:59] client config for pause-321903: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/profiles/pause-321903/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/profiles/pause-321903/client.key", CAFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(
nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1018 18:17:02.084821  179367 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bxt8s" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:17:03.239126  181365 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-785999:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.794833877s)
	I1018 18:17:03.239155  181365 kic.go:203] duration metric: took 4.795005129s to extract preloaded images to volume ...
	W1018 18:17:03.239302  181365 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1018 18:17:03.239419  181365 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 18:17:03.344721  181365 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-785999 --name force-systemd-env-785999 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-785999 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-785999 --network force-systemd-env-785999 --ip 192.168.76.2 --volume force-systemd-env-785999:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 18:17:03.778404  181365 cli_runner.go:164] Run: docker container inspect force-systemd-env-785999 --format={{.State.Running}}
	I1018 18:17:03.811083  181365 cli_runner.go:164] Run: docker container inspect force-systemd-env-785999 --format={{.State.Status}}
	I1018 18:17:03.840985  181365 cli_runner.go:164] Run: docker exec force-systemd-env-785999 stat /var/lib/dpkg/alternatives/iptables
	I1018 18:17:03.912218  181365 oci.go:144] the created container "force-systemd-env-785999" has a running status.
	I1018 18:17:03.912247  181365 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/force-systemd-env-785999/id_rsa...
	I1018 18:17:05.233972  181365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/force-systemd-env-785999/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1018 18:17:05.234027  181365 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21409-2509/.minikube/machines/force-systemd-env-785999/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 18:17:05.257304  181365 cli_runner.go:164] Run: docker container inspect force-systemd-env-785999 --format={{.State.Status}}
	I1018 18:17:05.278586  181365 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 18:17:05.278611  181365 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-785999 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 18:17:05.326569  181365 cli_runner.go:164] Run: docker container inspect force-systemd-env-785999 --format={{.State.Status}}
	I1018 18:17:05.346146  181365 machine.go:93] provisionDockerMachine start ...
	I1018 18:17:05.346250  181365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-785999
	I1018 18:17:05.365040  181365 main.go:141] libmachine: Using SSH client type: native
	I1018 18:17:05.365427  181365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33028 <nil> <nil>}
	I1018 18:17:05.365442  181365 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 18:17:05.366134  181365 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	W1018 18:17:04.102714  179367 pod_ready.go:104] pod "coredns-66bc5c9577-bxt8s" is not "Ready", error: <nil>
	I1018 18:17:05.091857  179367 pod_ready.go:94] pod "coredns-66bc5c9577-bxt8s" is "Ready"
	I1018 18:17:05.091891  179367 pod_ready.go:86] duration metric: took 3.007038204s for pod "coredns-66bc5c9577-bxt8s" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:17:05.095742  179367 pod_ready.go:83] waiting for pod "etcd-pause-321903" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:17:05.102362  179367 pod_ready.go:94] pod "etcd-pause-321903" is "Ready"
	I1018 18:17:05.102392  179367 pod_ready.go:86] duration metric: took 6.619489ms for pod "etcd-pause-321903" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:17:05.105617  179367 pod_ready.go:83] waiting for pod "kube-apiserver-pause-321903" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:17:06.111993  179367 pod_ready.go:94] pod "kube-apiserver-pause-321903" is "Ready"
	I1018 18:17:06.112027  179367 pod_ready.go:86] duration metric: took 1.006380913s for pod "kube-apiserver-pause-321903" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:17:06.115268  179367 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-321903" in "kube-system" namespace to be "Ready" or be gone ...
	W1018 18:17:08.121012  179367 pod_ready.go:104] pod "kube-controller-manager-pause-321903" is not "Ready", error: <nil>
	I1018 18:17:10.122429  179367 pod_ready.go:94] pod "kube-controller-manager-pause-321903" is "Ready"
	I1018 18:17:10.122453  179367 pod_ready.go:86] duration metric: took 4.007147706s for pod "kube-controller-manager-pause-321903" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:17:10.125468  179367 pod_ready.go:83] waiting for pod "kube-proxy-6ntpd" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:17:10.130891  179367 pod_ready.go:94] pod "kube-proxy-6ntpd" is "Ready"
	I1018 18:17:10.130913  179367 pod_ready.go:86] duration metric: took 5.417586ms for pod "kube-proxy-6ntpd" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:17:10.134074  179367 pod_ready.go:83] waiting for pod "kube-scheduler-pause-321903" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:17:10.888472  179367 pod_ready.go:94] pod "kube-scheduler-pause-321903" is "Ready"
	I1018 18:17:10.888494  179367 pod_ready.go:86] duration metric: took 754.384763ms for pod "kube-scheduler-pause-321903" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:17:10.888506  179367 pod_ready.go:40] duration metric: took 8.808969494s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 18:17:10.976656  179367 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 18:17:10.980010  179367 out.go:179] * Done! kubectl is now configured to use "pause-321903" cluster and "default" namespace by default
	I1018 18:17:08.520917  181365 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-env-785999
	
	I1018 18:17:08.520966  181365 ubuntu.go:182] provisioning hostname "force-systemd-env-785999"
	I1018 18:17:08.521031  181365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-785999
	I1018 18:17:08.539300  181365 main.go:141] libmachine: Using SSH client type: native
	I1018 18:17:08.539628  181365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33028 <nil> <nil>}
	I1018 18:17:08.539646  181365 main.go:141] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-785999 && echo "force-systemd-env-785999" | sudo tee /etc/hostname
	I1018 18:17:08.699364  181365 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-env-785999
	
	I1018 18:17:08.699463  181365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-785999
	I1018 18:17:08.717159  181365 main.go:141] libmachine: Using SSH client type: native
	I1018 18:17:08.717471  181365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33028 <nil> <nil>}
	I1018 18:17:08.717494  181365 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-785999' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-785999/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-785999' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 18:17:08.873126  181365 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 18:17:08.873151  181365 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-2509/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-2509/.minikube}
	I1018 18:17:08.873170  181365 ubuntu.go:190] setting up certificates
	I1018 18:17:08.873179  181365 provision.go:84] configureAuth start
	I1018 18:17:08.873254  181365 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-785999
	I1018 18:17:08.895849  181365 provision.go:143] copyHostCerts
	I1018 18:17:08.895887  181365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem
	I1018 18:17:08.895919  181365 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem, removing ...
	I1018 18:17:08.895926  181365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem
	I1018 18:17:08.896007  181365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem (1078 bytes)
	I1018 18:17:08.896091  181365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem
	I1018 18:17:08.896107  181365 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem, removing ...
	I1018 18:17:08.896111  181365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem
	I1018 18:17:08.896136  181365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem (1123 bytes)
	I1018 18:17:08.896204  181365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem
	I1018 18:17:08.896221  181365 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem, removing ...
	I1018 18:17:08.896225  181365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem
	I1018 18:17:08.896248  181365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem (1675 bytes)
	I1018 18:17:08.896300  181365 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-785999 san=[127.0.0.1 192.168.76.2 force-systemd-env-785999 localhost minikube]
	I1018 18:17:09.372234  181365 provision.go:177] copyRemoteCerts
	I1018 18:17:09.372311  181365 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 18:17:09.372355  181365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-785999
	I1018 18:17:09.389213  181365 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33028 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/force-systemd-env-785999/id_rsa Username:docker}
	I1018 18:17:09.492669  181365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1018 18:17:09.492742  181365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 18:17:09.510628  181365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1018 18:17:09.510688  181365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1018 18:17:09.528645  181365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1018 18:17:09.528760  181365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1671 bytes)
	I1018 18:17:09.550175  181365 provision.go:87] duration metric: took 676.973331ms to configureAuth
	I1018 18:17:09.550202  181365 ubuntu.go:206] setting minikube options for container-runtime
	I1018 18:17:09.550412  181365 config.go:182] Loaded profile config "force-systemd-env-785999": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 18:17:09.550526  181365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-785999
	I1018 18:17:09.570581  181365 main.go:141] libmachine: Using SSH client type: native
	I1018 18:17:09.570889  181365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33028 <nil> <nil>}
	I1018 18:17:09.570909  181365 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 18:17:09.851021  181365 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 18:17:09.851044  181365 machine.go:96] duration metric: took 4.504870843s to provisionDockerMachine
	I1018 18:17:09.851064  181365 client.go:171] duration metric: took 12.371886762s to LocalClient.Create
	I1018 18:17:09.851078  181365 start.go:167] duration metric: took 12.371944207s to libmachine.API.Create "force-systemd-env-785999"
	I1018 18:17:09.851088  181365 start.go:293] postStartSetup for "force-systemd-env-785999" (driver="docker")
	I1018 18:17:09.851098  181365 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 18:17:09.851170  181365 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 18:17:09.851220  181365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-785999
	I1018 18:17:09.885086  181365 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33028 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/force-systemd-env-785999/id_rsa Username:docker}
	I1018 18:17:10.014745  181365 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 18:17:10.019072  181365 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 18:17:10.019105  181365 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 18:17:10.019118  181365 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/addons for local assets ...
	I1018 18:17:10.019183  181365 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/files for local assets ...
	I1018 18:17:10.019279  181365 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> 43202.pem in /etc/ssl/certs
	I1018 18:17:10.019290  181365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> /etc/ssl/certs/43202.pem
	I1018 18:17:10.019399  181365 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 18:17:10.027893  181365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /etc/ssl/certs/43202.pem (1708 bytes)
	I1018 18:17:10.048403  181365 start.go:296] duration metric: took 197.300233ms for postStartSetup
	I1018 18:17:10.048809  181365 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-785999
	I1018 18:17:10.067029  181365 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/force-systemd-env-785999/config.json ...
	I1018 18:17:10.067313  181365 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 18:17:10.067361  181365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-785999
	I1018 18:17:10.084577  181365 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33028 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/force-systemd-env-785999/id_rsa Username:docker}
	I1018 18:17:10.190253  181365 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 18:17:10.194762  181365 start.go:128] duration metric: took 12.719198688s to createHost
	I1018 18:17:10.194794  181365 start.go:83] releasing machines lock for "force-systemd-env-785999", held for 12.719319674s
	I1018 18:17:10.194862  181365 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-785999
	I1018 18:17:10.211228  181365 ssh_runner.go:195] Run: cat /version.json
	I1018 18:17:10.211290  181365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-785999
	I1018 18:17:10.211543  181365 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 18:17:10.211601  181365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-785999
	I1018 18:17:10.229697  181365 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33028 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/force-systemd-env-785999/id_rsa Username:docker}
	I1018 18:17:10.230994  181365 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33028 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/force-systemd-env-785999/id_rsa Username:docker}
	I1018 18:17:10.418318  181365 ssh_runner.go:195] Run: systemctl --version
	I1018 18:17:10.424562  181365 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 18:17:10.460231  181365 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 18:17:10.465450  181365 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 18:17:10.465519  181365 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 18:17:10.495422  181365 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1018 18:17:10.495499  181365 start.go:495] detecting cgroup driver to use...
	I1018 18:17:10.495529  181365 start.go:499] using "systemd" cgroup driver as enforced via flags
	I1018 18:17:10.495606  181365 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 18:17:10.527903  181365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 18:17:10.547564  181365 docker.go:218] disabling cri-docker service (if available) ...
	I1018 18:17:10.547628  181365 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 18:17:10.566319  181365 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 18:17:10.583413  181365 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 18:17:10.708899  181365 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 18:17:10.846176  181365 docker.go:234] disabling docker service ...
	I1018 18:17:10.846288  181365 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 18:17:10.868618  181365 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 18:17:10.883027  181365 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 18:17:11.085049  181365 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 18:17:11.238338  181365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 18:17:11.256095  181365 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 18:17:11.277049  181365 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 18:17:11.277148  181365 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:17:11.291011  181365 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 18:17:11.291084  181365 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:17:11.302995  181365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:17:11.315447  181365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:17:11.324007  181365 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 18:17:11.333382  181365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:17:11.342240  181365 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:17:11.356055  181365 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:17:11.364816  181365 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 18:17:11.373576  181365 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 18:17:11.381941  181365 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 18:17:11.549027  181365 ssh_runner.go:195] Run: sudo systemctl restart crio
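	(Aside: taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the settings sketched below. This is reconstructed from the commands in the log, not captured from the node.)
	# pause_image = "registry.k8s.io/pause:3.10.1"
	# cgroup_manager = "systemd"
	# conmon_cgroup = "pod"
	# default_sysctls = [
	#   "net.ipv4.ip_unprivileged_port_start=0",
	# ]
	# One way to confirm after the restart:
	grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf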
	I1018 18:17:11.693546  181365 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 18:17:11.693667  181365 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 18:17:11.705533  181365 start.go:563] Will wait 60s for crictl version
	I1018 18:17:11.705597  181365 ssh_runner.go:195] Run: which crictl
	I1018 18:17:11.709433  181365 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 18:17:11.739885  181365 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 18:17:11.740037  181365 ssh_runner.go:195] Run: crio --version
	I1018 18:17:11.776659  181365 ssh_runner.go:195] Run: crio --version
	I1018 18:17:11.810175  181365 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 18:17:11.813095  181365 cli_runner.go:164] Run: docker network inspect force-systemd-env-785999 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 18:17:11.830239  181365 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1018 18:17:11.834208  181365 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 18:17:11.843823  181365 kubeadm.go:883] updating cluster {Name:force-systemd-env-785999 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-785999 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 18:17:11.843933  181365 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 18:17:11.843992  181365 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 18:17:11.879125  181365 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 18:17:11.879148  181365 crio.go:433] Images already preloaded, skipping extraction
	I1018 18:17:11.879204  181365 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 18:17:11.906290  181365 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 18:17:11.906311  181365 cache_images.go:85] Images are preloaded, skipping loading
	I1018 18:17:11.906319  181365 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1018 18:17:11.906449  181365 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-env-785999 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-785999 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 18:17:11.906532  181365 ssh_runner.go:195] Run: crio config
	I1018 18:17:11.985313  181365 cni.go:84] Creating CNI manager for ""
	I1018 18:17:11.985378  181365 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 18:17:11.985412  181365 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 18:17:11.985464  181365 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-785999 NodeName:force-systemd-env-785999 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 18:17:11.985615  181365 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-env-785999"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 18:17:11.985701  181365 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 18:17:11.994576  181365 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 18:17:11.994686  181365 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 18:17:12.003724  181365 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1018 18:17:12.026944  181365 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 18:17:12.047261  181365 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
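	(Aside: at this point the kubelet unit, its 10-kubeadm.conf drop-in, and the kubeadm config rendered above have all been copied to the node. A minimal way to inspect or exercise them outside the test, assuming the paths from the scp steps above and noting that kubeadm may rename kubeadm.yaml.new during init:)
	# Show the merged kubelet unit plus the 10-kubeadm.conf drop-in written above.
	systemctl cat kubelet
	# Exercise the generated kubeadm config without changing the node.
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run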
	I1018 18:17:12.075408  181365 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1018 18:17:12.079624  181365 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 18:17:12.090410  181365 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	
	
	==> CRI-O <==
	Oct 18 18:16:52 pause-321903 crio[2063]: time="2025-10-18T18:16:52.852247049Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 18:16:52 pause-321903 crio[2063]: time="2025-10-18T18:16:52.852563008Z" level=info msg="Started container" PID=2278 containerID=e27366c82d5cb638c304a969a81298d5df85aada0411f1d79cdf701c215ca024 description=kube-system/etcd-pause-321903/etcd id=7c672b8e-031d-4429-bba4-f5d138a8a6bb name=/runtime.v1.RuntimeService/StartContainer sandboxID=aed3c896387540511a7e2e2e6f63ecba67ac0b50c030316f5c23bc6acec84ce0
	Oct 18 18:16:52 pause-321903 crio[2063]: time="2025-10-18T18:16:52.881980528Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 18:16:52 pause-321903 crio[2063]: time="2025-10-18T18:16:52.918554208Z" level=info msg="Started container" PID=2277 containerID=657241d3f85a70a79a49eceb02219d308003745e67fb44fd088e3e9c4b8e4772 description=kube-system/kube-proxy-6ntpd/kube-proxy id=07c2a9a7-091d-4f67-890d-f78deea29941 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d835638d52db0ee13b9e43edcfd04fa446327618e1591db419ac965337592d97
	Oct 18 18:16:52 pause-321903 crio[2063]: time="2025-10-18T18:16:52.919064598Z" level=info msg="Created container b1f5de574a87e23c6882f0de27620ef3b5dcc7d2dd2cca428ed032ed283b7f17: kube-system/kube-controller-manager-pause-321903/kube-controller-manager" id=9d302fb0-db83-4ad4-99a7-788889128d30 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 18:16:52 pause-321903 crio[2063]: time="2025-10-18T18:16:52.944421013Z" level=info msg="Starting container: b1f5de574a87e23c6882f0de27620ef3b5dcc7d2dd2cca428ed032ed283b7f17" id=c76cbdf4-6151-4eef-ad0e-953a037e7443 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 18:16:52 pause-321903 crio[2063]: time="2025-10-18T18:16:52.949624492Z" level=info msg="Started container" PID=2294 containerID=b1f5de574a87e23c6882f0de27620ef3b5dcc7d2dd2cca428ed032ed283b7f17 description=kube-system/kube-controller-manager-pause-321903/kube-controller-manager id=c76cbdf4-6151-4eef-ad0e-953a037e7443 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8a3ffc30a98593f74d945573b23260d7e3f58bfd0d2070faeb5a0d0582f31de4
	Oct 18 18:16:53 pause-321903 crio[2063]: time="2025-10-18T18:16:53.024470483Z" level=info msg="Created container 509dbb4e9ee8a1d111116d82883b8bcaf502fb1be8d2a0478c3a9c6a300aa9c9: kube-system/kube-apiserver-pause-321903/kube-apiserver" id=bf009dcc-fa76-4f8c-b232-caa3b492048e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 18:16:53 pause-321903 crio[2063]: time="2025-10-18T18:16:53.033295382Z" level=info msg="Starting container: 509dbb4e9ee8a1d111116d82883b8bcaf502fb1be8d2a0478c3a9c6a300aa9c9" id=ff21e651-efad-4423-bb58-135cd96072a7 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 18:16:53 pause-321903 crio[2063]: time="2025-10-18T18:16:53.035541392Z" level=info msg="Started container" PID=2320 containerID=509dbb4e9ee8a1d111116d82883b8bcaf502fb1be8d2a0478c3a9c6a300aa9c9 description=kube-system/kube-apiserver-pause-321903/kube-apiserver id=ff21e651-efad-4423-bb58-135cd96072a7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9c94bbddf8ea0e16c2810e8d588df09313708a62714287940ec81408e87a12ea
	Oct 18 18:16:53 pause-321903 crio[2063]: time="2025-10-18T18:16:53.057362074Z" level=info msg="Created container 1230f9e5cf3e9675dbf837caca1f6efe823b10055878ee033b6d97fdec3c4a73: kube-system/kube-scheduler-pause-321903/kube-scheduler" id=c3eba72b-862f-4e98-bbc6-a8e10e051237 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 18:16:53 pause-321903 crio[2063]: time="2025-10-18T18:16:53.058004272Z" level=info msg="Starting container: 1230f9e5cf3e9675dbf837caca1f6efe823b10055878ee033b6d97fdec3c4a73" id=ddd5283b-6b6f-4387-b11f-5c0a54bbec94 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 18:16:53 pause-321903 crio[2063]: time="2025-10-18T18:16:53.059928219Z" level=info msg="Started container" PID=2338 containerID=1230f9e5cf3e9675dbf837caca1f6efe823b10055878ee033b6d97fdec3c4a73 description=kube-system/kube-scheduler-pause-321903/kube-scheduler id=ddd5283b-6b6f-4387-b11f-5c0a54bbec94 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9197cda6c5ed473b673a65209627eec1c79efb169d6a41ea11d5889c90099a7e
	Oct 18 18:17:03 pause-321903 crio[2063]: time="2025-10-18T18:17:03.261548555Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 18:17:03 pause-321903 crio[2063]: time="2025-10-18T18:17:03.266960045Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 18:17:03 pause-321903 crio[2063]: time="2025-10-18T18:17:03.267004345Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 18:17:03 pause-321903 crio[2063]: time="2025-10-18T18:17:03.267029371Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 18:17:03 pause-321903 crio[2063]: time="2025-10-18T18:17:03.279535161Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 18:17:03 pause-321903 crio[2063]: time="2025-10-18T18:17:03.279577418Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 18:17:03 pause-321903 crio[2063]: time="2025-10-18T18:17:03.279600811Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 18:17:03 pause-321903 crio[2063]: time="2025-10-18T18:17:03.289495287Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 18:17:03 pause-321903 crio[2063]: time="2025-10-18T18:17:03.289533975Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 18:17:03 pause-321903 crio[2063]: time="2025-10-18T18:17:03.289557048Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 18:17:03 pause-321903 crio[2063]: time="2025-10-18T18:17:03.295793541Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 18:17:03 pause-321903 crio[2063]: time="2025-10-18T18:17:03.295998475Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	1230f9e5cf3e9       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   21 seconds ago       Running             kube-scheduler            1                   9197cda6c5ed4       kube-scheduler-pause-321903            kube-system
	509dbb4e9ee8a       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   21 seconds ago       Running             kube-apiserver            1                   9c94bbddf8ea0       kube-apiserver-pause-321903            kube-system
	b1f5de574a87e       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   21 seconds ago       Running             kube-controller-manager   1                   8a3ffc30a9859       kube-controller-manager-pause-321903   kube-system
	0a87401b26f8f       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   21 seconds ago       Running             coredns                   1                   20e0edde1fed0       coredns-66bc5c9577-bxt8s               kube-system
	4afe595d4901a       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   21 seconds ago       Running             kindnet-cni               1                   6db6d133cccd2       kindnet-h5sxp                          kube-system
	e27366c82d5cb       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   21 seconds ago       Running             etcd                      1                   aed3c89638754       etcd-pause-321903                      kube-system
	657241d3f85a7       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   21 seconds ago       Running             kube-proxy                1                   d835638d52db0       kube-proxy-6ntpd                       kube-system
	b0ab4ec6d0c28       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   44 seconds ago       Exited              coredns                   0                   20e0edde1fed0       coredns-66bc5c9577-bxt8s               kube-system
	d8d01459be672       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   6db6d133cccd2       kindnet-h5sxp                          kube-system
	fb3fca7cd1009       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   d835638d52db0       kube-proxy-6ntpd                       kube-system
	2438a9ff996c0       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   9c94bbddf8ea0       kube-apiserver-pause-321903            kube-system
	fe01b2bdff4a1       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   8a3ffc30a9859       kube-controller-manager-pause-321903   kube-system
	9b891540c5dec       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   9197cda6c5ed4       kube-scheduler-pause-321903            kube-system
	e1cc985af8447       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   aed3c89638754       etcd-pause-321903                      kube-system
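	(Aside: this table is CRI-O's view of the pause-321903 node, showing one freshly started and one exited instance of each component. It corresponds to the crictl listing, e.g. the sketch below; the profile name is the one from the logs.)
	minikube -p pause-321903 ssh "sudo crictl ps -a"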
	
	
	==> coredns [0a87401b26f8fe5eca4265d3f61980cf50be35c9ad6297578a3c1117545e88e9] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39864 - 25223 "HINFO IN 7704298137972469783.8140309687720505532. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020442227s
	
	
	==> coredns [b0ab4ec6d0c28b9d0f51329318dcec692b7c5e3207c1daea9fe798392dcf0b44] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54196 - 12069 "HINFO IN 4332264017597495645.7173261532093308133. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.042550666s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-321903
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-321903
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=pause-321903
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T18_15_43_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 18:15:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-321903
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 18:17:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 18:16:29 +0000   Sat, 18 Oct 2025 18:15:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 18:16:29 +0000   Sat, 18 Oct 2025 18:15:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 18:16:29 +0000   Sat, 18 Oct 2025 18:15:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 18:16:29 +0000   Sat, 18 Oct 2025 18:16:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-321903
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                99a5aadd-8fb2-4a98-b85f-359bf169b051
	  Boot ID:                    8a1d6305-2994-4aa4-a4cc-6b62966b9918
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-bxt8s                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     87s
	  kube-system                 etcd-pause-321903                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         92s
	  kube-system                 kindnet-h5sxp                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      87s
	  kube-system                 kube-apiserver-pause-321903             250m (12%)    0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-controller-manager-pause-321903    200m (10%)    0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-proxy-6ntpd                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-scheduler-pause-321903             100m (5%)     0 (0%)      0 (0%)           0 (0%)         93s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 85s   kube-proxy       
	  Normal   Starting                 13s   kube-proxy       
	  Normal   Starting                 92s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 92s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  92s   kubelet          Node pause-321903 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    92s   kubelet          Node pause-321903 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     92s   kubelet          Node pause-321903 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           88s   node-controller  Node pause-321903 event: Registered Node pause-321903 in Controller
	  Normal   NodeReady                45s   kubelet          Node pause-321903 status is now: NodeReady
	  Normal   RegisteredNode           11s   node-controller  Node pause-321903 event: Registered Node pause-321903 in Controller
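	(Aside: the node description above is standard kubectl output; against this profile it can be reproduced with kubectl pointed at the context minikube creates for the profile, as sketched below.)
	kubectl --context pause-321903 describe node pause-321903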
	
	
	==> dmesg <==
	[Oct18 17:48] overlayfs: idmapped layers are currently not supported
	[  +2.594489] overlayfs: idmapped layers are currently not supported
	[Oct18 17:50] overlayfs: idmapped layers are currently not supported
	[ +42.240353] overlayfs: idmapped layers are currently not supported
	[Oct18 17:51] overlayfs: idmapped layers are currently not supported
	[Oct18 17:53] overlayfs: idmapped layers are currently not supported
	[Oct18 17:58] overlayfs: idmapped layers are currently not supported
	[ +33.320958] overlayfs: idmapped layers are currently not supported
	[Oct18 18:00] overlayfs: idmapped layers are currently not supported
	[Oct18 18:01] overlayfs: idmapped layers are currently not supported
	[Oct18 18:02] overlayfs: idmapped layers are currently not supported
	[Oct18 18:04] overlayfs: idmapped layers are currently not supported
	[ +24.403909] overlayfs: idmapped layers are currently not supported
	[  +6.162774] overlayfs: idmapped layers are currently not supported
	[Oct18 18:05] overlayfs: idmapped layers are currently not supported
	[ +25.128760] overlayfs: idmapped layers are currently not supported
	[Oct18 18:06] overlayfs: idmapped layers are currently not supported
	[Oct18 18:07] overlayfs: idmapped layers are currently not supported
	[Oct18 18:08] overlayfs: idmapped layers are currently not supported
	[Oct18 18:09] overlayfs: idmapped layers are currently not supported
	[Oct18 18:11] overlayfs: idmapped layers are currently not supported
	[Oct18 18:13] overlayfs: idmapped layers are currently not supported
	[ +30.969240] overlayfs: idmapped layers are currently not supported
	[Oct18 18:15] overlayfs: idmapped layers are currently not supported
	[Oct18 18:16] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [e1cc985af8447c811346ce79c30a2b8fa589396bae84a02969101095e0145ae8] <==
	{"level":"warn","ts":"2025-10-18T18:15:38.669175Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:15:38.685966Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:15:38.707037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:15:38.727125Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:15:38.741517Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:15:38.756606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:15:38.824472Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39278","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T18:16:44.410900Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-18T18:16:44.410951Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-321903","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-10-18T18:16:44.411032Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-18T18:16:44.411086Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-18T18:16:44.552424Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T18:16:44.552511Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2025-10-18T18:16:44.552622Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-18T18:16:44.552640Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-18T18:16:44.552985Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-18T18:16:44.553016Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-18T18:16:44.553026Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-18T18:16:44.552892Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-18T18:16:44.553046Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-18T18:16:44.553053Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T18:16:44.555916Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-10-18T18:16:44.556003Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T18:16:44.556043Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-18T18:16:44.556051Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-321903","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> etcd [e27366c82d5cb638c304a969a81298d5df85aada0411f1d79cdf701c215ca024] <==
	{"level":"warn","ts":"2025-10-18T18:16:56.980409Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:16:57.046858Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:16:57.123368Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:16:57.169458Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:16:57.253114Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:16:57.322881Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:16:57.378407Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:16:57.442777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:16:57.493399Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:16:57.551043Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:16:57.609898Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:16:57.650366Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:16:57.686403Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:16:57.739807Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:16:57.768329Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:16:57.841095Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:16:57.876323Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:16:57.941961Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:16:57.985491Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:16:58.023653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:16:58.122457Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:16:58.188730Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:16:58.224539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:16:58.227689Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:16:58.355927Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50712","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 18:17:14 up  1:59,  0 user,  load average: 2.90, 3.14, 2.60
	Linux pause-321903 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4afe595d4901a5c13fe23c2f892b2c0e58181ac48b98ee859ee099da4d4a1607] <==
	I1018 18:16:53.017829       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 18:16:53.019399       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1018 18:16:53.019522       1 main.go:148] setting mtu 1500 for CNI 
	I1018 18:16:53.019580       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 18:16:53.019920       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T18:16:53Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 18:16:53.273290       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 18:16:53.273324       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 18:16:53.273336       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 18:16:53.274291       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 18:17:00.577096       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 18:17:00.577273       1 metrics.go:72] Registering metrics
	I1018 18:17:00.577379       1 controller.go:711] "Syncing nftables rules"
	I1018 18:17:03.261159       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 18:17:03.261216       1 main.go:301] handling current node
	I1018 18:17:13.260990       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 18:17:13.261047       1 main.go:301] handling current node
	
	
	==> kindnet [d8d01459be672ec4fd8b85084bd40212b1672ef1c3a3885c3617da35e9c4fb8b] <==
	I1018 18:15:48.620030       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 18:15:48.620453       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1018 18:15:48.620636       1 main.go:148] setting mtu 1500 for CNI 
	I1018 18:15:48.620681       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 18:15:48.620726       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T18:15:48Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 18:15:48.814934       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 18:15:48.814966       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 18:15:48.814976       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 18:15:48.815282       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1018 18:16:18.815345       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1018 18:16:18.815421       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1018 18:16:18.815542       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1018 18:16:18.815653       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1018 18:16:20.415980       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 18:16:20.416054       1 metrics.go:72] Registering metrics
	I1018 18:16:20.416132       1 controller.go:711] "Syncing nftables rules"
	I1018 18:16:28.817003       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 18:16:28.817164       1 main.go:301] handling current node
	I1018 18:16:38.821039       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 18:16:38.821143       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2438a9ff996c0d245fc840f1c24567b79350ac528f1466e447512ce99687f671] <==
	0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 18:16:44.431913       1 logging.go:55] [core] [Channel #31 SubChannel #33]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 18:16:44.431953       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 18:16:44.431991       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 18:16:44.432031       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 18:16:44.432073       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 18:16:44.432114       1 logging.go:55] [core] [Channel #235 SubChannel #237]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 18:16:44.432154       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 18:16:44.432198       1 logging.go:55] [core] [Channel #147 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 18:16:44.432237       1 logging.go:55] [core] [Channel #183 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 18:16:44.432281       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 18:16:44.432325       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 18:16:44.432366       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 18:16:44.432407       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 18:16:44.432445       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 18:16:44.432484       1 logging.go:55] [core] [Channel #159 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 18:16:44.432524       1 logging.go:55] [core] [Channel #243 SubChannel #245]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 18:16:44.432565       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 18:16:44.432603       1 logging.go:55] [core] [Channel #119 SubChannel #121]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 18:16:44.432645       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 18:16:44.432697       1 logging.go:55] [core] [Channel #27 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 18:16:44.441502       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 18:16:44.441583       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 18:16:44.441660       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 18:16:44.447089       1 logging.go:55] [core] [Channel #13 SubChannel #15]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [509dbb4e9ee8a1d111116d82883b8bcaf502fb1be8d2a0478c3a9c6a300aa9c9] <==
	I1018 18:17:00.296161       1 aggregator.go:171] initial CRD sync complete...
	I1018 18:17:00.297915       1 autoregister_controller.go:144] Starting autoregister controller
	I1018 18:17:00.297956       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 18:17:00.297145       1 policy_source.go:240] refreshing policies
	I1018 18:17:00.307546       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1018 18:17:00.321036       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1018 18:17:00.317563       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 18:17:00.447105       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1018 18:17:00.449002       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1018 18:17:00.449270       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1018 18:17:00.449554       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1018 18:17:00.449682       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1018 18:17:00.457702       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1018 18:17:00.321188       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 18:17:00.508321       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1018 18:17:00.435936       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1018 18:17:00.442421       1 cache.go:39] Caches are synced for autoregister controller
	I1018 18:17:00.570352       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1018 18:17:00.581626       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1018 18:17:00.655951       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 18:17:02.166564       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 18:17:03.706734       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 18:17:03.822496       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 18:17:03.911829       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 18:17:03.960605       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [b1f5de574a87e23c6882f0de27620ef3b5dcc7d2dd2cca428ed032ed283b7f17] <==
	I1018 18:17:03.657586       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 18:17:03.658844       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1018 18:17:03.664496       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1018 18:17:03.667719       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1018 18:17:03.667892       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 18:17:03.675306       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 18:17:03.679146       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 18:17:03.679243       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 18:17:03.681141       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 18:17:03.684273       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1018 18:17:03.684612       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 18:17:03.693109       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1018 18:17:03.694906       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1018 18:17:03.695028       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 18:17:03.695121       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-321903"
	I1018 18:17:03.695171       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1018 18:17:03.699167       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 18:17:03.700317       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 18:17:03.706970       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1018 18:17:03.707150       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 18:17:03.711335       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1018 18:17:03.717107       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1018 18:17:03.717319       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1018 18:17:03.721405       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 18:17:03.722953       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	
	
	==> kube-controller-manager [fe01b2bdff4a1ff4347ebb1876a7fe94ea12cf41798722170b21b95c0ea7477c] <==
	I1018 18:15:46.623422       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1018 18:15:46.623807       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 18:15:46.623947       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1018 18:15:46.624166       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1018 18:15:46.632258       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-321903" podCIDRs=["10.244.0.0/24"]
	I1018 18:15:46.632395       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1018 18:15:46.635021       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1018 18:15:46.635484       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1018 18:15:46.635528       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1018 18:15:46.635555       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1018 18:15:46.635644       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 18:15:46.642235       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 18:15:46.642309       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 18:15:46.657740       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1018 18:15:46.666927       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 18:15:46.666957       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 18:15:46.666965       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 18:15:46.667034       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1018 18:15:46.667118       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 18:15:46.667229       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-321903"
	I1018 18:15:46.667284       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1018 18:15:46.667560       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 18:15:46.693490       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 18:15:46.732824       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 18:16:31.675406       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [657241d3f85a70a79a49eceb02219d308003745e67fb44fd088e3e9c4b8e4772] <==
	I1018 18:16:56.237919       1 server_linux.go:53] "Using iptables proxy"
	I1018 18:16:58.767811       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 18:17:00.677038       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 18:17:00.677185       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1018 18:17:00.677341       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 18:17:01.313876       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 18:17:01.313946       1 server_linux.go:132] "Using iptables Proxier"
	I1018 18:17:01.476512       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 18:17:01.477114       1 server.go:527] "Version info" version="v1.34.1"
	I1018 18:17:01.477370       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 18:17:01.478894       1 config.go:200] "Starting service config controller"
	I1018 18:17:01.479017       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 18:17:01.479068       1 config.go:106] "Starting endpoint slice config controller"
	I1018 18:17:01.479130       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 18:17:01.479223       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 18:17:01.479254       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 18:17:01.612984       1 config.go:309] "Starting node config controller"
	I1018 18:17:01.621382       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 18:17:01.686574       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 18:17:01.727351       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 18:17:01.784769       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 18:17:01.786067       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [fb3fca7cd1009ed922f3f234148284e89a8be61f5965386518f93c0ab5ecbb2d] <==
	I1018 18:15:48.445060       1 server_linux.go:53] "Using iptables proxy"
	I1018 18:15:48.642653       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 18:15:48.743757       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 18:15:48.743873       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1018 18:15:48.743984       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 18:15:48.761982       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 18:15:48.762036       1 server_linux.go:132] "Using iptables Proxier"
	I1018 18:15:48.766145       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 18:15:48.766458       1 server.go:527] "Version info" version="v1.34.1"
	I1018 18:15:48.766482       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 18:15:48.769873       1 config.go:106] "Starting endpoint slice config controller"
	I1018 18:15:48.769949       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 18:15:48.770280       1 config.go:200] "Starting service config controller"
	I1018 18:15:48.770362       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 18:15:48.772191       1 config.go:309] "Starting node config controller"
	I1018 18:15:48.772210       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 18:15:48.772218       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 18:15:48.772630       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 18:15:48.772647       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 18:15:48.870749       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 18:15:48.870762       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 18:15:48.873282       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [1230f9e5cf3e9675dbf837caca1f6efe823b10055878ee033b6d97fdec3c4a73] <==
	I1018 18:16:57.659159       1 serving.go:386] Generated self-signed cert in-memory
	I1018 18:17:02.506635       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 18:17:02.506676       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 18:17:02.531866       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 18:17:02.531979       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1018 18:17:02.532007       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1018 18:17:02.532045       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 18:17:02.538449       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 18:17:02.538488       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 18:17:02.538512       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 18:17:02.538520       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 18:17:02.632169       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1018 18:17:02.639660       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 18:17:02.639662       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [9b891540c5deca63a1228b886f79648205e27476e41e85a70e1a582b416d1b3f] <==
	E1018 18:15:39.641909       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 18:15:39.642073       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1018 18:15:39.642140       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 18:15:39.642187       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 18:15:40.450993       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 18:15:40.454448       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 18:15:40.506516       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 18:15:40.506741       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 18:15:40.526371       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1018 18:15:40.526649       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 18:15:40.559670       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 18:15:40.591759       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 18:15:40.614688       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 18:15:40.652369       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 18:15:40.749329       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 18:15:40.826304       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 18:15:40.909272       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 18:15:40.916372       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I1018 18:15:43.605802       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 18:16:44.425948       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1018 18:16:44.427945       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1018 18:16:44.427967       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1018 18:16:44.428078       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 18:16:44.429148       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1018 18:16:44.429177       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 18 18:16:52 pause-321903 kubelet[1307]: E1018 18:16:52.629153    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-h5sxp\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="5909bf09-6c59-4f6e-859c-ac6a5c0792f9" pod="kube-system/kindnet-h5sxp"
	Oct 18 18:16:52 pause-321903 kubelet[1307]: E1018 18:16:52.629614    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6ntpd\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="34df88c7-c080-4704-a7cd-012f263ce7b9" pod="kube-system/kube-proxy-6ntpd"
	Oct 18 18:16:52 pause-321903 kubelet[1307]: E1018 18:16:52.629949    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-bxt8s\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="0fbf5bcd-ba89-4603-85bf-895e985ed0cb" pod="kube-system/coredns-66bc5c9577-bxt8s"
	Oct 18 18:16:52 pause-321903 kubelet[1307]: E1018 18:16:52.630315    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-321903\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="676b2f286830ba93b13188fe018ad23f" pod="kube-system/kube-controller-manager-pause-321903"
	Oct 18 18:16:52 pause-321903 kubelet[1307]: E1018 18:16:52.630627    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-321903\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="817ca23bfd256fb34800f207379338f6" pod="kube-system/kube-apiserver-pause-321903"
	Oct 18 18:16:52 pause-321903 kubelet[1307]: E1018 18:16:52.630931    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-321903\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="a3ead399c9d5e599281fbbf31ce37802" pod="kube-system/etcd-pause-321903"
	Oct 18 18:16:52 pause-321903 kubelet[1307]: E1018 18:16:52.631232    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-321903\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="d67c068537ef56c5aeb9ec4b71ea396f" pod="kube-system/kube-scheduler-pause-321903"
	Oct 18 18:16:52 pause-321903 kubelet[1307]: I1018 18:16:52.634013    1307 scope.go:117] "RemoveContainer" containerID="9b891540c5deca63a1228b886f79648205e27476e41e85a70e1a582b416d1b3f"
	Oct 18 18:16:52 pause-321903 kubelet[1307]: E1018 18:16:52.634406    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-bxt8s\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="0fbf5bcd-ba89-4603-85bf-895e985ed0cb" pod="kube-system/coredns-66bc5c9577-bxt8s"
	Oct 18 18:16:52 pause-321903 kubelet[1307]: E1018 18:16:52.634974    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-321903\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="676b2f286830ba93b13188fe018ad23f" pod="kube-system/kube-controller-manager-pause-321903"
	Oct 18 18:16:52 pause-321903 kubelet[1307]: E1018 18:16:52.635286    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-321903\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="817ca23bfd256fb34800f207379338f6" pod="kube-system/kube-apiserver-pause-321903"
	Oct 18 18:16:52 pause-321903 kubelet[1307]: E1018 18:16:52.635585    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-321903\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="a3ead399c9d5e599281fbbf31ce37802" pod="kube-system/etcd-pause-321903"
	Oct 18 18:16:52 pause-321903 kubelet[1307]: E1018 18:16:52.635893    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-321903\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="d67c068537ef56c5aeb9ec4b71ea396f" pod="kube-system/kube-scheduler-pause-321903"
	Oct 18 18:16:52 pause-321903 kubelet[1307]: E1018 18:16:52.636221    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-h5sxp\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="5909bf09-6c59-4f6e-859c-ac6a5c0792f9" pod="kube-system/kindnet-h5sxp"
	Oct 18 18:16:52 pause-321903 kubelet[1307]: E1018 18:16:52.636522    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6ntpd\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="34df88c7-c080-4704-a7cd-012f263ce7b9" pod="kube-system/kube-proxy-6ntpd"
	Oct 18 18:16:52 pause-321903 kubelet[1307]: E1018 18:16:52.875099    1307 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/events\": dial tcp 192.168.85.2:8443: connect: connection refused" event="&Event{ObjectMeta:{kube-scheduler-pause-321903.186fa8a4bd0c8356  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-scheduler-pause-321903,UID:d67c068537ef56c5aeb9ec4b71ea396f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://127.0.0.1:10259/readyz\": dial tcp 127.0.0.1:10259: connect: connection refused,Source:EventSource{Component:kubelet,Host:pause-321903,},FirstTimestamp:2025-10-18 18:16:44.82497007 +0000 UTC m=+62.719894291,LastTimestamp:2025-10-18 18:16:44.82497007 +0000 UTC m=+62.719894291,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:pause-321903,}"
	Oct 18 18:16:59 pause-321903 kubelet[1307]: E1018 18:16:59.911278    1307 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-bxt8s\" is forbidden: User \"system:node:pause-321903\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-321903' and this object" podUID="0fbf5bcd-ba89-4603-85bf-895e985ed0cb" pod="kube-system/coredns-66bc5c9577-bxt8s"
	Oct 18 18:16:59 pause-321903 kubelet[1307]: E1018 18:16:59.921973    1307 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-321903\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-321903' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Oct 18 18:16:59 pause-321903 kubelet[1307]: E1018 18:16:59.922177    1307 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-321903\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-321903' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Oct 18 18:16:59 pause-321903 kubelet[1307]: E1018 18:16:59.922440    1307 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-321903\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-321903' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Oct 18 18:17:00 pause-321903 kubelet[1307]: E1018 18:17:00.210438    1307 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-321903\" is forbidden: User \"system:node:pause-321903\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-321903' and this object" podUID="676b2f286830ba93b13188fe018ad23f" pod="kube-system/kube-controller-manager-pause-321903"
	Oct 18 18:17:02 pause-321903 kubelet[1307]: W1018 18:17:02.571440    1307 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Oct 18 18:17:11 pause-321903 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 18:17:11 pause-321903 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 18:17:11 pause-321903 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-321903 -n pause-321903
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-321903 -n pause-321903: exit status 2 (499.002305ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-321903 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-321903
helpers_test.go:243: (dbg) docker inspect pause-321903:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b28ca1d5f7a6e3974217e890bb848cfb7a7dcd4a61a8b67db2a23bfdc31c260e",
	        "Created": "2025-10-18T18:15:13.869574866Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 173319,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T18:15:13.931176394Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/b28ca1d5f7a6e3974217e890bb848cfb7a7dcd4a61a8b67db2a23bfdc31c260e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b28ca1d5f7a6e3974217e890bb848cfb7a7dcd4a61a8b67db2a23bfdc31c260e/hostname",
	        "HostsPath": "/var/lib/docker/containers/b28ca1d5f7a6e3974217e890bb848cfb7a7dcd4a61a8b67db2a23bfdc31c260e/hosts",
	        "LogPath": "/var/lib/docker/containers/b28ca1d5f7a6e3974217e890bb848cfb7a7dcd4a61a8b67db2a23bfdc31c260e/b28ca1d5f7a6e3974217e890bb848cfb7a7dcd4a61a8b67db2a23bfdc31c260e-json.log",
	        "Name": "/pause-321903",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-321903:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-321903",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b28ca1d5f7a6e3974217e890bb848cfb7a7dcd4a61a8b67db2a23bfdc31c260e",
	                "LowerDir": "/var/lib/docker/overlay2/f6ede9998230dc5f3d47fa3e062dd23465a972ca8e4778d15ec5d10aa3b1adc3-init/diff:/var/lib/docker/overlay2/584ab177b02ad2db5330471b7171ad39934c457d8615b9ee4939a04b59f78474/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f6ede9998230dc5f3d47fa3e062dd23465a972ca8e4778d15ec5d10aa3b1adc3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f6ede9998230dc5f3d47fa3e062dd23465a972ca8e4778d15ec5d10aa3b1adc3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f6ede9998230dc5f3d47fa3e062dd23465a972ca8e4778d15ec5d10aa3b1adc3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-321903",
	                "Source": "/var/lib/docker/volumes/pause-321903/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-321903",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-321903",
	                "name.minikube.sigs.k8s.io": "pause-321903",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "45b88b0a13e61b90e10101caeb283a4693bd9723436a3af9b522953035c38a5d",
	            "SandboxKey": "/var/run/docker/netns/45b88b0a13e6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33018"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33019"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33022"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33020"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33021"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-321903": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fa:84:36:61:a3:92",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "784ab4e1778c53d0a27ab91d3e6988f16075fb6e81844b257b18551c5e24185c",
	                    "EndpointID": "969352ab4c0d6bc7fc1246e3d15da385c0034e9327783d662ae5173c9f1284b6",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-321903",
	                        "b28ca1d5f7a6"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-321903 -n pause-321903
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-321903 -n pause-321903: exit status 2 (416.282375ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-321903 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-321903 logs -n 25: (1.640898597s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                    ARGS                                                    │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-111074 sudo systemctl cat kubelet --no-pager                                                     │ cilium-111074            │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │                     │
	│ ssh     │ -p cilium-111074 sudo journalctl -xeu kubelet --all --full --no-pager                                      │ cilium-111074            │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │                     │
	│ ssh     │ -p cilium-111074 sudo cat /etc/kubernetes/kubelet.conf                                                     │ cilium-111074            │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │                     │
	│ ssh     │ -p cilium-111074 sudo cat /var/lib/kubelet/config.yaml                                                     │ cilium-111074            │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │                     │
	│ ssh     │ -p cilium-111074 sudo systemctl status docker --all --full --no-pager                                      │ cilium-111074            │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │                     │
	│ ssh     │ -p cilium-111074 sudo systemctl cat docker --no-pager                                                      │ cilium-111074            │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │                     │
	│ ssh     │ -p cilium-111074 sudo cat /etc/docker/daemon.json                                                          │ cilium-111074            │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │                     │
	│ ssh     │ -p cilium-111074 sudo docker system info                                                                   │ cilium-111074            │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │                     │
	│ ssh     │ -p cilium-111074 sudo systemctl status cri-docker --all --full --no-pager                                  │ cilium-111074            │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │                     │
	│ ssh     │ -p cilium-111074 sudo systemctl cat cri-docker --no-pager                                                  │ cilium-111074            │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │                     │
	│ ssh     │ -p cilium-111074 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                             │ cilium-111074            │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │                     │
	│ ssh     │ -p cilium-111074 sudo cat /usr/lib/systemd/system/cri-docker.service                                       │ cilium-111074            │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │                     │
	│ ssh     │ -p cilium-111074 sudo cri-dockerd --version                                                                │ cilium-111074            │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │                     │
	│ ssh     │ -p cilium-111074 sudo systemctl status containerd --all --full --no-pager                                  │ cilium-111074            │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │                     │
	│ ssh     │ -p cilium-111074 sudo systemctl cat containerd --no-pager                                                  │ cilium-111074            │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │                     │
	│ ssh     │ -p cilium-111074 sudo cat /lib/systemd/system/containerd.service                                           │ cilium-111074            │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │                     │
	│ ssh     │ -p cilium-111074 sudo cat /etc/containerd/config.toml                                                      │ cilium-111074            │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │                     │
	│ ssh     │ -p cilium-111074 sudo containerd config dump                                                               │ cilium-111074            │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │                     │
	│ ssh     │ -p cilium-111074 sudo systemctl status crio --all --full --no-pager                                        │ cilium-111074            │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │                     │
	│ ssh     │ -p cilium-111074 sudo systemctl cat crio --no-pager                                                        │ cilium-111074            │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │                     │
	│ ssh     │ -p cilium-111074 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                              │ cilium-111074            │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │                     │
	│ ssh     │ -p cilium-111074 sudo crio config                                                                          │ cilium-111074            │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │                     │
	│ delete  │ -p cilium-111074                                                                                           │ cilium-111074            │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │ 18 Oct 25 18:16 UTC │
	│ start   │ -p force-systemd-env-785999 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-env-785999 │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │                     │
	│ pause   │ -p pause-321903 --alsologtostderr -v=5                                                                     │ pause-321903             │ jenkins │ v1.37.0 │ 18 Oct 25 18:17 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 18:16:57
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 18:16:57.094578  181365 out.go:360] Setting OutFile to fd 1 ...
	I1018 18:16:57.094789  181365 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 18:16:57.094816  181365 out.go:374] Setting ErrFile to fd 2...
	I1018 18:16:57.094838  181365 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 18:16:57.095131  181365 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-2509/.minikube/bin
	I1018 18:16:57.095587  181365 out.go:368] Setting JSON to false
	I1018 18:16:57.096522  181365 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7166,"bootTime":1760804251,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1018 18:16:57.096611  181365 start.go:141] virtualization:  
	I1018 18:16:57.100274  181365 out.go:179] * [force-systemd-env-785999] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 18:16:57.103315  181365 out.go:179]   - MINIKUBE_LOCATION=21409
	I1018 18:16:57.103379  181365 notify.go:220] Checking for updates...
	I1018 18:16:57.109356  181365 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 18:16:57.112443  181365 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-2509/kubeconfig
	I1018 18:16:57.115845  181365 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-2509/.minikube
	I1018 18:16:57.118661  181365 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 18:16:57.121542  181365 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=true
	I1018 18:16:57.125010  181365 config.go:182] Loaded profile config "pause-321903": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 18:16:57.125107  181365 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 18:16:57.170806  181365 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 18:16:57.170953  181365 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 18:16:57.321257  181365 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-18 18:16:57.301212974 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 18:16:57.321364  181365 docker.go:318] overlay module found
	I1018 18:16:57.324614  181365 out.go:179] * Using the docker driver based on user configuration
	I1018 18:16:57.327433  181365 start.go:305] selected driver: docker
	I1018 18:16:57.327456  181365 start.go:925] validating driver "docker" against <nil>
	I1018 18:16:57.327471  181365 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 18:16:57.328203  181365 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 18:16:57.427258  181365 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-18 18:16:57.416253319 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 18:16:57.427406  181365 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 18:16:57.427617  181365 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1018 18:16:57.430632  181365 out.go:179] * Using Docker driver with root privileges
	I1018 18:16:57.433389  181365 cni.go:84] Creating CNI manager for ""
	I1018 18:16:57.433452  181365 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 18:16:57.433461  181365 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 18:16:57.433544  181365 start.go:349] cluster config:
	{Name:force-systemd-env-785999 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-785999 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 18:16:57.436446  181365 out.go:179] * Starting "force-systemd-env-785999" primary control-plane node in "force-systemd-env-785999" cluster
	I1018 18:16:57.439207  181365 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 18:16:57.442103  181365 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 18:16:57.444900  181365 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 18:16:57.444969  181365 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1018 18:16:57.444980  181365 cache.go:58] Caching tarball of preloaded images
	I1018 18:16:57.445016  181365 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 18:16:57.445075  181365 preload.go:233] Found /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 18:16:57.445084  181365 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 18:16:57.445190  181365 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/force-systemd-env-785999/config.json ...
	I1018 18:16:57.445207  181365 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/force-systemd-env-785999/config.json: {Name:mk41c0342913787b6b41bfce0198abad5f0c466c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:16:57.475310  181365 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 18:16:57.475331  181365 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 18:16:57.475348  181365 cache.go:232] Successfully downloaded all kic artifacts
	I1018 18:16:57.475372  181365 start.go:360] acquireMachinesLock for force-systemd-env-785999: {Name:mk25aac2754eb303f882f8cecf9a7d47e61b7a50 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 18:16:57.475467  181365 start.go:364] duration metric: took 79.033µs to acquireMachinesLock for "force-systemd-env-785999"
	I1018 18:16:57.475490  181365 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-785999 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-785999 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 18:16:57.475549  181365 start.go:125] createHost starting for "" (driver="docker")
	I1018 18:16:54.173916  179367 addons.go:514] duration metric: took 6.985951ms for enable addons: enabled=[]
	I1018 18:16:54.174048  179367 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 18:16:54.512279  179367 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 18:16:54.537863  179367 node_ready.go:35] waiting up to 6m0s for node "pause-321903" to be "Ready" ...
	I1018 18:16:57.478908  181365 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1018 18:16:57.479134  181365 start.go:159] libmachine.API.Create for "force-systemd-env-785999" (driver="docker")
	I1018 18:16:57.479168  181365 client.go:168] LocalClient.Create starting
	I1018 18:16:57.479260  181365 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem
	I1018 18:16:57.479307  181365 main.go:141] libmachine: Decoding PEM data...
	I1018 18:16:57.479325  181365 main.go:141] libmachine: Parsing certificate...
	I1018 18:16:57.479379  181365 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem
	I1018 18:16:57.479408  181365 main.go:141] libmachine: Decoding PEM data...
	I1018 18:16:57.479422  181365 main.go:141] libmachine: Parsing certificate...
	I1018 18:16:57.479778  181365 cli_runner.go:164] Run: docker network inspect force-systemd-env-785999 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 18:16:57.502394  181365 cli_runner.go:211] docker network inspect force-systemd-env-785999 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 18:16:57.502474  181365 network_create.go:284] running [docker network inspect force-systemd-env-785999] to gather additional debugging logs...
	I1018 18:16:57.502490  181365 cli_runner.go:164] Run: docker network inspect force-systemd-env-785999
	W1018 18:16:57.526496  181365 cli_runner.go:211] docker network inspect force-systemd-env-785999 returned with exit code 1
	I1018 18:16:57.526526  181365 network_create.go:287] error running [docker network inspect force-systemd-env-785999]: docker network inspect force-systemd-env-785999: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-785999 not found
	I1018 18:16:57.526540  181365 network_create.go:289] output of [docker network inspect force-systemd-env-785999]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-785999 not found
	
	** /stderr **
	I1018 18:16:57.526633  181365 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 18:16:57.566933  181365 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-903568cdf824 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:7a:80:c0:8c:ed} reservation:<nil>}
	I1018 18:16:57.567239  181365 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-ee9fcaab9ca8 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a6:a7:65:1b:c0:41} reservation:<nil>}
	I1018 18:16:57.567514  181365 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-414fc11e154b IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:86:f0:a8:1a:86:00} reservation:<nil>}
	I1018 18:16:57.567911  181365 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019b7250}
	I1018 18:16:57.567936  181365 network_create.go:124] attempt to create docker network force-systemd-env-785999 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1018 18:16:57.567993  181365 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-785999 force-systemd-env-785999
	I1018 18:16:57.684061  181365 network_create.go:108] docker network force-systemd-env-785999 192.168.76.0/24 created
	I1018 18:16:57.684096  181365 kic.go:121] calculated static IP "192.168.76.2" for the "force-systemd-env-785999" container
	I1018 18:16:57.684166  181365 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 18:16:57.705398  181365 cli_runner.go:164] Run: docker volume create force-systemd-env-785999 --label name.minikube.sigs.k8s.io=force-systemd-env-785999 --label created_by.minikube.sigs.k8s.io=true
	I1018 18:16:57.730452  181365 oci.go:103] Successfully created a docker volume force-systemd-env-785999
	I1018 18:16:57.730543  181365 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-785999-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-785999 --entrypoint /usr/bin/test -v force-systemd-env-785999:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 18:16:58.444082  181365 oci.go:107] Successfully prepared a docker volume force-systemd-env-785999
	I1018 18:16:58.444128  181365 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 18:16:58.444148  181365 kic.go:194] Starting extracting preloaded images to volume ...
	I1018 18:16:58.444228  181365 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-785999:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1018 18:17:00.486170  179367 node_ready.go:49] node "pause-321903" is "Ready"
	I1018 18:17:00.486202  179367 node_ready.go:38] duration metric: took 5.948302498s for node "pause-321903" to be "Ready" ...
	I1018 18:17:00.486216  179367 api_server.go:52] waiting for apiserver process to appear ...
	I1018 18:17:00.486310  179367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 18:17:00.517989  179367 api_server.go:72] duration metric: took 6.351356474s to wait for apiserver process to appear ...
	I1018 18:17:00.518014  179367 api_server.go:88] waiting for apiserver healthz status ...
	I1018 18:17:00.518035  179367 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 18:17:00.590556  179367 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 18:17:00.590642  179367 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 18:17:01.018187  179367 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 18:17:01.028761  179367 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 18:17:01.028928  179367 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 18:17:01.518137  179367 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 18:17:01.528288  179367 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 18:17:01.528370  179367 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 18:17:02.019035  179367 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 18:17:02.033030  179367 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1018 18:17:02.034847  179367 api_server.go:141] control plane version: v1.34.1
	I1018 18:17:02.034875  179367 api_server.go:131] duration metric: took 1.516854772s to wait for apiserver health ...
	I1018 18:17:02.034885  179367 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 18:17:02.041579  179367 system_pods.go:59] 7 kube-system pods found
	I1018 18:17:02.041683  179367 system_pods.go:61] "coredns-66bc5c9577-bxt8s" [0fbf5bcd-ba89-4603-85bf-895e985ed0cb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 18:17:02.041739  179367 system_pods.go:61] "etcd-pause-321903" [15471f42-4862-4c5f-9c9d-855a26a92fbc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 18:17:02.041766  179367 system_pods.go:61] "kindnet-h5sxp" [5909bf09-6c59-4f6e-859c-ac6a5c0792f9] Running
	I1018 18:17:02.041793  179367 system_pods.go:61] "kube-apiserver-pause-321903" [5c4e37f9-1f24-4284-a173-200ccea50d12] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 18:17:02.041833  179367 system_pods.go:61] "kube-controller-manager-pause-321903" [ad95ed65-fc9d-4ea5-aa25-e2d7eba789ac] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 18:17:02.041857  179367 system_pods.go:61] "kube-proxy-6ntpd" [34df88c7-c080-4704-a7cd-012f263ce7b9] Running
	I1018 18:17:02.041877  179367 system_pods.go:61] "kube-scheduler-pause-321903" [3d6c27c5-e0d8-462b-a0e5-e0c5a4c4000f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 18:17:02.041907  179367 system_pods.go:74] duration metric: took 7.014964ms to wait for pod list to return data ...
	I1018 18:17:02.041934  179367 default_sa.go:34] waiting for default service account to be created ...
	I1018 18:17:02.046140  179367 default_sa.go:45] found service account: "default"
	I1018 18:17:02.046165  179367 default_sa.go:55] duration metric: took 4.209924ms for default service account to be created ...
	I1018 18:17:02.046176  179367 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 18:17:02.051033  179367 system_pods.go:86] 7 kube-system pods found
	I1018 18:17:02.051121  179367 system_pods.go:89] "coredns-66bc5c9577-bxt8s" [0fbf5bcd-ba89-4603-85bf-895e985ed0cb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 18:17:02.051144  179367 system_pods.go:89] "etcd-pause-321903" [15471f42-4862-4c5f-9c9d-855a26a92fbc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 18:17:02.051166  179367 system_pods.go:89] "kindnet-h5sxp" [5909bf09-6c59-4f6e-859c-ac6a5c0792f9] Running
	I1018 18:17:02.051213  179367 system_pods.go:89] "kube-apiserver-pause-321903" [5c4e37f9-1f24-4284-a173-200ccea50d12] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 18:17:02.051236  179367 system_pods.go:89] "kube-controller-manager-pause-321903" [ad95ed65-fc9d-4ea5-aa25-e2d7eba789ac] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 18:17:02.051272  179367 system_pods.go:89] "kube-proxy-6ntpd" [34df88c7-c080-4704-a7cd-012f263ce7b9] Running
	I1018 18:17:02.051298  179367 system_pods.go:89] "kube-scheduler-pause-321903" [3d6c27c5-e0d8-462b-a0e5-e0c5a4c4000f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 18:17:02.051321  179367 system_pods.go:126] duration metric: took 5.138371ms to wait for k8s-apps to be running ...
	I1018 18:17:02.051359  179367 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 18:17:02.051478  179367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 18:17:02.067940  179367 system_svc.go:56] duration metric: took 16.574068ms WaitForService to wait for kubelet
	I1018 18:17:02.068030  179367 kubeadm.go:586] duration metric: took 7.901401064s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 18:17:02.068081  179367 node_conditions.go:102] verifying NodePressure condition ...
	I1018 18:17:02.072118  179367 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 18:17:02.072158  179367 node_conditions.go:123] node cpu capacity is 2
	I1018 18:17:02.072173  179367 node_conditions.go:105] duration metric: took 4.048535ms to run NodePressure ...
	I1018 18:17:02.072188  179367 start.go:241] waiting for startup goroutines ...
	I1018 18:17:02.072196  179367 start.go:246] waiting for cluster config update ...
	I1018 18:17:02.072208  179367 start.go:255] writing updated cluster config ...
	I1018 18:17:02.074668  179367 ssh_runner.go:195] Run: rm -f paused
	I1018 18:17:02.079501  179367 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 18:17:02.080091  179367 kapi.go:59] client config for pause-321903: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/profiles/pause-321903/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/profiles/pause-321903/client.key", CAFile:"/home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(
nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1018 18:17:02.084821  179367 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bxt8s" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:17:03.239126  181365 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-785999:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.794833877s)
	I1018 18:17:03.239155  181365 kic.go:203] duration metric: took 4.795005129s to extract preloaded images to volume ...
	W1018 18:17:03.239302  181365 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1018 18:17:03.239419  181365 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 18:17:03.344721  181365 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-785999 --name force-systemd-env-785999 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-785999 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-785999 --network force-systemd-env-785999 --ip 192.168.76.2 --volume force-systemd-env-785999:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 18:17:03.778404  181365 cli_runner.go:164] Run: docker container inspect force-systemd-env-785999 --format={{.State.Running}}
	I1018 18:17:03.811083  181365 cli_runner.go:164] Run: docker container inspect force-systemd-env-785999 --format={{.State.Status}}
	I1018 18:17:03.840985  181365 cli_runner.go:164] Run: docker exec force-systemd-env-785999 stat /var/lib/dpkg/alternatives/iptables
	I1018 18:17:03.912218  181365 oci.go:144] the created container "force-systemd-env-785999" has a running status.
	I1018 18:17:03.912247  181365 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/force-systemd-env-785999/id_rsa...
	I1018 18:17:05.233972  181365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/force-systemd-env-785999/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1018 18:17:05.234027  181365 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21409-2509/.minikube/machines/force-systemd-env-785999/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 18:17:05.257304  181365 cli_runner.go:164] Run: docker container inspect force-systemd-env-785999 --format={{.State.Status}}
	I1018 18:17:05.278586  181365 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 18:17:05.278611  181365 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-785999 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 18:17:05.326569  181365 cli_runner.go:164] Run: docker container inspect force-systemd-env-785999 --format={{.State.Status}}
	I1018 18:17:05.346146  181365 machine.go:93] provisionDockerMachine start ...
	I1018 18:17:05.346250  181365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-785999
	I1018 18:17:05.365040  181365 main.go:141] libmachine: Using SSH client type: native
	I1018 18:17:05.365427  181365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33028 <nil> <nil>}
	I1018 18:17:05.365442  181365 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 18:17:05.366134  181365 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	W1018 18:17:04.102714  179367 pod_ready.go:104] pod "coredns-66bc5c9577-bxt8s" is not "Ready", error: <nil>
	I1018 18:17:05.091857  179367 pod_ready.go:94] pod "coredns-66bc5c9577-bxt8s" is "Ready"
	I1018 18:17:05.091891  179367 pod_ready.go:86] duration metric: took 3.007038204s for pod "coredns-66bc5c9577-bxt8s" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:17:05.095742  179367 pod_ready.go:83] waiting for pod "etcd-pause-321903" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:17:05.102362  179367 pod_ready.go:94] pod "etcd-pause-321903" is "Ready"
	I1018 18:17:05.102392  179367 pod_ready.go:86] duration metric: took 6.619489ms for pod "etcd-pause-321903" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:17:05.105617  179367 pod_ready.go:83] waiting for pod "kube-apiserver-pause-321903" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:17:06.111993  179367 pod_ready.go:94] pod "kube-apiserver-pause-321903" is "Ready"
	I1018 18:17:06.112027  179367 pod_ready.go:86] duration metric: took 1.006380913s for pod "kube-apiserver-pause-321903" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:17:06.115268  179367 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-321903" in "kube-system" namespace to be "Ready" or be gone ...
	W1018 18:17:08.121012  179367 pod_ready.go:104] pod "kube-controller-manager-pause-321903" is not "Ready", error: <nil>
	I1018 18:17:10.122429  179367 pod_ready.go:94] pod "kube-controller-manager-pause-321903" is "Ready"
	I1018 18:17:10.122453  179367 pod_ready.go:86] duration metric: took 4.007147706s for pod "kube-controller-manager-pause-321903" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:17:10.125468  179367 pod_ready.go:83] waiting for pod "kube-proxy-6ntpd" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:17:10.130891  179367 pod_ready.go:94] pod "kube-proxy-6ntpd" is "Ready"
	I1018 18:17:10.130913  179367 pod_ready.go:86] duration metric: took 5.417586ms for pod "kube-proxy-6ntpd" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:17:10.134074  179367 pod_ready.go:83] waiting for pod "kube-scheduler-pause-321903" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:17:10.888472  179367 pod_ready.go:94] pod "kube-scheduler-pause-321903" is "Ready"
	I1018 18:17:10.888494  179367 pod_ready.go:86] duration metric: took 754.384763ms for pod "kube-scheduler-pause-321903" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:17:10.888506  179367 pod_ready.go:40] duration metric: took 8.808969494s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 18:17:10.976656  179367 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 18:17:10.980010  179367 out.go:179] * Done! kubectl is now configured to use "pause-321903" cluster and "default" namespace by default
	I1018 18:17:08.520917  181365 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-env-785999
	
	I1018 18:17:08.520966  181365 ubuntu.go:182] provisioning hostname "force-systemd-env-785999"
	I1018 18:17:08.521031  181365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-785999
	I1018 18:17:08.539300  181365 main.go:141] libmachine: Using SSH client type: native
	I1018 18:17:08.539628  181365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33028 <nil> <nil>}
	I1018 18:17:08.539646  181365 main.go:141] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-785999 && echo "force-systemd-env-785999" | sudo tee /etc/hostname
	I1018 18:17:08.699364  181365 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-env-785999
	
	I1018 18:17:08.699463  181365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-785999
	I1018 18:17:08.717159  181365 main.go:141] libmachine: Using SSH client type: native
	I1018 18:17:08.717471  181365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33028 <nil> <nil>}
	I1018 18:17:08.717494  181365 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-785999' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-785999/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-785999' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 18:17:08.873126  181365 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 18:17:08.873151  181365 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-2509/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-2509/.minikube}
	I1018 18:17:08.873170  181365 ubuntu.go:190] setting up certificates
	I1018 18:17:08.873179  181365 provision.go:84] configureAuth start
	I1018 18:17:08.873254  181365 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-785999
	I1018 18:17:08.895849  181365 provision.go:143] copyHostCerts
	I1018 18:17:08.895887  181365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem
	I1018 18:17:08.895919  181365 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem, removing ...
	I1018 18:17:08.895926  181365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem
	I1018 18:17:08.896007  181365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem (1078 bytes)
	I1018 18:17:08.896091  181365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem
	I1018 18:17:08.896107  181365 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem, removing ...
	I1018 18:17:08.896111  181365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem
	I1018 18:17:08.896136  181365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem (1123 bytes)
	I1018 18:17:08.896204  181365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem
	I1018 18:17:08.896221  181365 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem, removing ...
	I1018 18:17:08.896225  181365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem
	I1018 18:17:08.896248  181365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem (1675 bytes)
	I1018 18:17:08.896300  181365 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-785999 san=[127.0.0.1 192.168.76.2 force-systemd-env-785999 localhost minikube]
	I1018 18:17:09.372234  181365 provision.go:177] copyRemoteCerts
	I1018 18:17:09.372311  181365 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 18:17:09.372355  181365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-785999
	I1018 18:17:09.389213  181365 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33028 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/force-systemd-env-785999/id_rsa Username:docker}
	I1018 18:17:09.492669  181365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1018 18:17:09.492742  181365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 18:17:09.510628  181365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1018 18:17:09.510688  181365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1018 18:17:09.528645  181365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1018 18:17:09.528760  181365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1671 bytes)
	I1018 18:17:09.550175  181365 provision.go:87] duration metric: took 676.973331ms to configureAuth
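For reference, the server certificate generated above is copied to /etc/docker/server.pem inside the node; a quick way to confirm its SAN list matches the one requested at 18:17:08.896300 (a sketch, assuming the profile still exists and openssl is available in the kicbase image):

    # Print the subject and SAN list of the provisioned server cert
    # (expects 127.0.0.1, 192.168.76.2, force-systemd-env-785999, localhost, minikube)
    out/minikube-linux-arm64 -p force-systemd-env-785999 ssh -- \
      "sudo openssl x509 -in /etc/docker/server.pem -noout -subject -ext subjectAltName"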
	I1018 18:17:09.550202  181365 ubuntu.go:206] setting minikube options for container-runtime
	I1018 18:17:09.550412  181365 config.go:182] Loaded profile config "force-systemd-env-785999": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 18:17:09.550526  181365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-785999
	I1018 18:17:09.570581  181365 main.go:141] libmachine: Using SSH client type: native
	I1018 18:17:09.570889  181365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33028 <nil> <nil>}
	I1018 18:17:09.570909  181365 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 18:17:09.851021  181365 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 18:17:09.851044  181365 machine.go:96] duration metric: took 4.504870843s to provisionDockerMachine
	I1018 18:17:09.851064  181365 client.go:171] duration metric: took 12.371886762s to LocalClient.Create
	I1018 18:17:09.851078  181365 start.go:167] duration metric: took 12.371944207s to libmachine.API.Create "force-systemd-env-785999"
	I1018 18:17:09.851088  181365 start.go:293] postStartSetup for "force-systemd-env-785999" (driver="docker")
	I1018 18:17:09.851098  181365 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 18:17:09.851170  181365 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 18:17:09.851220  181365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-785999
	I1018 18:17:09.885086  181365 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33028 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/force-systemd-env-785999/id_rsa Username:docker}
	I1018 18:17:10.014745  181365 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 18:17:10.019072  181365 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 18:17:10.019105  181365 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 18:17:10.019118  181365 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/addons for local assets ...
	I1018 18:17:10.019183  181365 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/files for local assets ...
	I1018 18:17:10.019279  181365 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> 43202.pem in /etc/ssl/certs
	I1018 18:17:10.019290  181365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> /etc/ssl/certs/43202.pem
	I1018 18:17:10.019399  181365 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 18:17:10.027893  181365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /etc/ssl/certs/43202.pem (1708 bytes)
	I1018 18:17:10.048403  181365 start.go:296] duration metric: took 197.300233ms for postStartSetup
	I1018 18:17:10.048809  181365 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-785999
	I1018 18:17:10.067029  181365 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/force-systemd-env-785999/config.json ...
	I1018 18:17:10.067313  181365 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 18:17:10.067361  181365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-785999
	I1018 18:17:10.084577  181365 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33028 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/force-systemd-env-785999/id_rsa Username:docker}
	I1018 18:17:10.190253  181365 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 18:17:10.194762  181365 start.go:128] duration metric: took 12.719198688s to createHost
	I1018 18:17:10.194794  181365 start.go:83] releasing machines lock for "force-systemd-env-785999", held for 12.719319674s
	I1018 18:17:10.194862  181365 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-785999
	I1018 18:17:10.211228  181365 ssh_runner.go:195] Run: cat /version.json
	I1018 18:17:10.211290  181365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-785999
	I1018 18:17:10.211543  181365 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 18:17:10.211601  181365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-785999
	I1018 18:17:10.229697  181365 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33028 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/force-systemd-env-785999/id_rsa Username:docker}
	I1018 18:17:10.230994  181365 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33028 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/force-systemd-env-785999/id_rsa Username:docker}
	I1018 18:17:10.418318  181365 ssh_runner.go:195] Run: systemctl --version
	I1018 18:17:10.424562  181365 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 18:17:10.460231  181365 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 18:17:10.465450  181365 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 18:17:10.465519  181365 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 18:17:10.495422  181365 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1018 18:17:10.495499  181365 start.go:495] detecting cgroup driver to use...
	I1018 18:17:10.495529  181365 start.go:499] using "systemd" cgroup driver as enforced via flags
	I1018 18:17:10.495606  181365 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 18:17:10.527903  181365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 18:17:10.547564  181365 docker.go:218] disabling cri-docker service (if available) ...
	I1018 18:17:10.547628  181365 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 18:17:10.566319  181365 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 18:17:10.583413  181365 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 18:17:10.708899  181365 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 18:17:10.846176  181365 docker.go:234] disabling docker service ...
	I1018 18:17:10.846288  181365 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 18:17:10.868618  181365 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 18:17:10.883027  181365 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 18:17:11.085049  181365 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 18:17:11.238338  181365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 18:17:11.256095  181365 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 18:17:11.277049  181365 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 18:17:11.277148  181365 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:17:11.291011  181365 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 18:17:11.291084  181365 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:17:11.302995  181365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:17:11.315447  181365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:17:11.324007  181365 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 18:17:11.333382  181365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:17:11.342240  181365 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:17:11.356055  181365 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:17:11.364816  181365 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 18:17:11.373576  181365 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 18:17:11.381941  181365 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 18:17:11.549027  181365 ssh_runner.go:195] Run: sudo systemctl restart crio
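The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf (pause image, systemd cgroup manager, conmon cgroup, unprivileged-port sysctl) before CRI-O is restarted; a sketch of how to confirm the resulting drop-in, assuming the stock kicbase layout of that file:

    # Show the keys the provisioning step just rewrote; expected values per the commands above.
    out/minikube-linux-arm64 -p force-systemd-env-785999 ssh -- \
      "grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf"
    # pause_image = "registry.k8s.io/pause:3.10.1"
    # cgroup_manager = "systemd"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",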
	I1018 18:17:11.693546  181365 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 18:17:11.693667  181365 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 18:17:11.705533  181365 start.go:563] Will wait 60s for crictl version
	I1018 18:17:11.705597  181365 ssh_runner.go:195] Run: which crictl
	I1018 18:17:11.709433  181365 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 18:17:11.739885  181365 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 18:17:11.740037  181365 ssh_runner.go:195] Run: crio --version
	I1018 18:17:11.776659  181365 ssh_runner.go:195] Run: crio --version
	I1018 18:17:11.810175  181365 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 18:17:11.813095  181365 cli_runner.go:164] Run: docker network inspect force-systemd-env-785999 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 18:17:11.830239  181365 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1018 18:17:11.834208  181365 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 18:17:11.843823  181365 kubeadm.go:883] updating cluster {Name:force-systemd-env-785999 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-785999 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 18:17:11.843933  181365 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 18:17:11.843992  181365 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 18:17:11.879125  181365 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 18:17:11.879148  181365 crio.go:433] Images already preloaded, skipping extraction
	I1018 18:17:11.879204  181365 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 18:17:11.906290  181365 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 18:17:11.906311  181365 cache_images.go:85] Images are preloaded, skipping loading
	I1018 18:17:11.906319  181365 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1018 18:17:11.906449  181365 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-env-785999 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-785999 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
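The [Unit]/[Service]/[Install] fragment above is what gets written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf at 18:17:12.003 below; on a live node the merged unit (base service plus this drop-in) can be reviewed with (a sketch):

    # Show kubelet.service together with the 10-kubeadm.conf drop-in minikube generated
    out/minikube-linux-arm64 -p force-systemd-env-785999 ssh -- "systemctl cat kubelet"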
	I1018 18:17:11.906532  181365 ssh_runner.go:195] Run: crio config
	I1018 18:17:11.985313  181365 cni.go:84] Creating CNI manager for ""
	I1018 18:17:11.985378  181365 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 18:17:11.985412  181365 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 18:17:11.985464  181365 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-785999 NodeName:force-systemd-env-785999 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 18:17:11.985615  181365 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-env-785999"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
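The rendered config above is shipped to /var/tmp/minikube/kubeadm.yaml.new (2220 bytes) and later promoted to kubeadm.yaml; it can be checked outside the minikube flow with the bundled kubeadm binary (a sketch, assuming kubeadm v1.34 still ships the `config validate` subcommand):

    # Validate the generated InitConfiguration/ClusterConfiguration/KubeletConfiguration/KubeProxyConfiguration
    out/minikube-linux-arm64 -p force-systemd-env-785999 ssh -- \
      "sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml"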
	
	I1018 18:17:11.985701  181365 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 18:17:11.994576  181365 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 18:17:11.994686  181365 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 18:17:12.003724  181365 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1018 18:17:12.026944  181365 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 18:17:12.047261  181365 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
	I1018 18:17:12.075408  181365 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1018 18:17:12.079624  181365 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 18:17:12.090410  181365 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 18:17:12.240815  181365 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 18:17:12.269624  181365 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/force-systemd-env-785999 for IP: 192.168.76.2
	I1018 18:17:12.269698  181365 certs.go:195] generating shared ca certs ...
	I1018 18:17:12.269727  181365 certs.go:227] acquiring lock for ca certs: {Name:mk544ed642fa2832e9f6dd22fa45f3270b7c1ad4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:17:12.269908  181365 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key
	I1018 18:17:12.269978  181365 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key
	I1018 18:17:12.270000  181365 certs.go:257] generating profile certs ...
	I1018 18:17:12.270088  181365 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/force-systemd-env-785999/client.key
	I1018 18:17:12.270124  181365 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/force-systemd-env-785999/client.crt with IP's: []
	I1018 18:17:12.742072  181365 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/force-systemd-env-785999/client.crt ...
	I1018 18:17:12.742102  181365 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/force-systemd-env-785999/client.crt: {Name:mk17c2b76d596318e8a1e6921147650fe285c3ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:17:12.742304  181365 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/force-systemd-env-785999/client.key ...
	I1018 18:17:12.742322  181365 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/force-systemd-env-785999/client.key: {Name:mk31d82c54e950f02e6521e2753b96892e6fa694 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:17:12.742413  181365 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/force-systemd-env-785999/apiserver.key.7dcfce74
	I1018 18:17:12.742429  181365 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/force-systemd-env-785999/apiserver.crt.7dcfce74 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1018 18:17:13.188113  181365 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/force-systemd-env-785999/apiserver.crt.7dcfce74 ...
	I1018 18:17:13.188143  181365 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/force-systemd-env-785999/apiserver.crt.7dcfce74: {Name:mk9604a33b01e0f64dac92dff1a680f5fa3fc5f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:17:13.188406  181365 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/force-systemd-env-785999/apiserver.key.7dcfce74 ...
	I1018 18:17:13.188444  181365 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/force-systemd-env-785999/apiserver.key.7dcfce74: {Name:mk684023db077a52acea7f768b24b602dd4f73fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:17:13.188586  181365 certs.go:382] copying /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/force-systemd-env-785999/apiserver.crt.7dcfce74 -> /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/force-systemd-env-785999/apiserver.crt
	I1018 18:17:13.188689  181365 certs.go:386] copying /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/force-systemd-env-785999/apiserver.key.7dcfce74 -> /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/force-systemd-env-785999/apiserver.key
	I1018 18:17:13.188750  181365 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/force-systemd-env-785999/proxy-client.key
	I1018 18:17:13.188771  181365 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/force-systemd-env-785999/proxy-client.crt with IP's: []
	I1018 18:17:13.992840  181365 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/force-systemd-env-785999/proxy-client.crt ...
	I1018 18:17:13.992912  181365 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/force-systemd-env-785999/proxy-client.crt: {Name:mk47cfd2be927206e8b5974ce60bdd0227429d8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:17:13.993352  181365 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/force-systemd-env-785999/proxy-client.key ...
	I1018 18:17:13.993392  181365 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/force-systemd-env-785999/proxy-client.key: {Name:mk20e2751243b2b97563724566597171fa84a3ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:17:13.993529  181365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1018 18:17:13.993594  181365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1018 18:17:13.993626  181365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1018 18:17:13.993672  181365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1018 18:17:13.993708  181365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/force-systemd-env-785999/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1018 18:17:13.993741  181365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/force-systemd-env-785999/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1018 18:17:13.993784  181365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/force-systemd-env-785999/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1018 18:17:13.993815  181365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/force-systemd-env-785999/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1018 18:17:13.993894  181365 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem (1338 bytes)
	W1018 18:17:13.993950  181365 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320_empty.pem, impossibly tiny 0 bytes
	I1018 18:17:13.993974  181365 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 18:17:13.994030  181365 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem (1078 bytes)
	I1018 18:17:13.994078  181365 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem (1123 bytes)
	I1018 18:17:13.994116  181365 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem (1675 bytes)
	I1018 18:17:13.994186  181365 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem (1708 bytes)
	I1018 18:17:13.994236  181365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> /usr/share/ca-certificates/43202.pem
	I1018 18:17:13.994265  181365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1018 18:17:13.994304  181365 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem -> /usr/share/ca-certificates/4320.pem
	I1018 18:17:13.994838  181365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 18:17:14.016835  181365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 18:17:14.039594  181365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 18:17:14.062192  181365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 18:17:14.087765  181365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/force-systemd-env-785999/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1018 18:17:14.107571  181365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/force-systemd-env-785999/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 18:17:14.129035  181365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/force-systemd-env-785999/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 18:17:14.148247  181365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/force-systemd-env-785999/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 18:17:14.173678  181365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /usr/share/ca-certificates/43202.pem (1708 bytes)
	I1018 18:17:14.195584  181365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 18:17:14.218098  181365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem --> /usr/share/ca-certificates/4320.pem (1338 bytes)
	I1018 18:17:14.246970  181365 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 18:17:14.261375  181365 ssh_runner.go:195] Run: openssl version
	I1018 18:17:14.268593  181365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 18:17:14.278498  181365 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 18:17:14.283869  181365 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I1018 18:17:14.283915  181365 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 18:17:14.332852  181365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 18:17:14.341429  181365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4320.pem && ln -fs /usr/share/ca-certificates/4320.pem /etc/ssl/certs/4320.pem"
	I1018 18:17:14.352154  181365 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4320.pem
	I1018 18:17:14.358080  181365 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 17:19 /usr/share/ca-certificates/4320.pem
	I1018 18:17:14.358184  181365 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4320.pem
	I1018 18:17:14.407746  181365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4320.pem /etc/ssl/certs/51391683.0"
	I1018 18:17:14.416194  181365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43202.pem && ln -fs /usr/share/ca-certificates/43202.pem /etc/ssl/certs/43202.pem"
	I1018 18:17:14.424700  181365 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43202.pem
	I1018 18:17:14.429365  181365 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 17:19 /usr/share/ca-certificates/43202.pem
	I1018 18:17:14.429490  181365 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43202.pem
	I1018 18:17:14.482470  181365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43202.pem /etc/ssl/certs/3ec20f2e.0"
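The hex link names created above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject hashes, which is how lookups in /etc/ssl/certs work; they can be reproduced from the host-side copies of the same files (a sketch, paths taken from the log):

    # Each command prints the hash used as the symlink name on the node (per the ln -fs targets above)
    openssl x509 -hash -noout -in /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt                         # b5213941 (minikubeCA.pem)
    openssl x509 -hash -noout -in /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem                 # 51391683
    openssl x509 -hash -noout -in /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem  # 3ec20f2e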
	I1018 18:17:14.494920  181365 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 18:17:14.501962  181365 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 18:17:14.502061  181365 kubeadm.go:400] StartCluster: {Name:force-systemd-env-785999 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-785999 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 18:17:14.502163  181365 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 18:17:14.502243  181365 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 18:17:14.545764  181365 cri.go:89] found id: ""
	I1018 18:17:14.545913  181365 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 18:17:14.558565  181365 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 18:17:14.575564  181365 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 18:17:14.575681  181365 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 18:17:14.587581  181365 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 18:17:14.587650  181365 kubeadm.go:157] found existing configuration files:
	
	I1018 18:17:14.587727  181365 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 18:17:14.596785  181365 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 18:17:14.596897  181365 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 18:17:14.604911  181365 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 18:17:14.613597  181365 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 18:17:14.613716  181365 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 18:17:14.622226  181365 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 18:17:14.631252  181365 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 18:17:14.631378  181365 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 18:17:14.639581  181365 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 18:17:14.648506  181365 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 18:17:14.648626  181365 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1018 18:17:14.656098  181365 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 18:17:14.713774  181365 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 18:17:14.721701  181365 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 18:17:14.750741  181365 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 18:17:14.750863  181365 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1018 18:17:14.750934  181365 kubeadm.go:318] OS: Linux
	I1018 18:17:14.751006  181365 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 18:17:14.751097  181365 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1018 18:17:14.751181  181365 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 18:17:14.751259  181365 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 18:17:14.751342  181365 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 18:17:14.751422  181365 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 18:17:14.751494  181365 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 18:17:14.751573  181365 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 18:17:14.751645  181365 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1018 18:17:14.844905  181365 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 18:17:14.845101  181365 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 18:17:14.845236  181365 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 18:17:14.855551  181365 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	
	
	==> CRI-O <==
	Oct 18 18:16:52 pause-321903 crio[2063]: time="2025-10-18T18:16:52.852247049Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 18:16:52 pause-321903 crio[2063]: time="2025-10-18T18:16:52.852563008Z" level=info msg="Started container" PID=2278 containerID=e27366c82d5cb638c304a969a81298d5df85aada0411f1d79cdf701c215ca024 description=kube-system/etcd-pause-321903/etcd id=7c672b8e-031d-4429-bba4-f5d138a8a6bb name=/runtime.v1.RuntimeService/StartContainer sandboxID=aed3c896387540511a7e2e2e6f63ecba67ac0b50c030316f5c23bc6acec84ce0
	Oct 18 18:16:52 pause-321903 crio[2063]: time="2025-10-18T18:16:52.881980528Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 18:16:52 pause-321903 crio[2063]: time="2025-10-18T18:16:52.918554208Z" level=info msg="Started container" PID=2277 containerID=657241d3f85a70a79a49eceb02219d308003745e67fb44fd088e3e9c4b8e4772 description=kube-system/kube-proxy-6ntpd/kube-proxy id=07c2a9a7-091d-4f67-890d-f78deea29941 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d835638d52db0ee13b9e43edcfd04fa446327618e1591db419ac965337592d97
	Oct 18 18:16:52 pause-321903 crio[2063]: time="2025-10-18T18:16:52.919064598Z" level=info msg="Created container b1f5de574a87e23c6882f0de27620ef3b5dcc7d2dd2cca428ed032ed283b7f17: kube-system/kube-controller-manager-pause-321903/kube-controller-manager" id=9d302fb0-db83-4ad4-99a7-788889128d30 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 18:16:52 pause-321903 crio[2063]: time="2025-10-18T18:16:52.944421013Z" level=info msg="Starting container: b1f5de574a87e23c6882f0de27620ef3b5dcc7d2dd2cca428ed032ed283b7f17" id=c76cbdf4-6151-4eef-ad0e-953a037e7443 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 18:16:52 pause-321903 crio[2063]: time="2025-10-18T18:16:52.949624492Z" level=info msg="Started container" PID=2294 containerID=b1f5de574a87e23c6882f0de27620ef3b5dcc7d2dd2cca428ed032ed283b7f17 description=kube-system/kube-controller-manager-pause-321903/kube-controller-manager id=c76cbdf4-6151-4eef-ad0e-953a037e7443 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8a3ffc30a98593f74d945573b23260d7e3f58bfd0d2070faeb5a0d0582f31de4
	Oct 18 18:16:53 pause-321903 crio[2063]: time="2025-10-18T18:16:53.024470483Z" level=info msg="Created container 509dbb4e9ee8a1d111116d82883b8bcaf502fb1be8d2a0478c3a9c6a300aa9c9: kube-system/kube-apiserver-pause-321903/kube-apiserver" id=bf009dcc-fa76-4f8c-b232-caa3b492048e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 18:16:53 pause-321903 crio[2063]: time="2025-10-18T18:16:53.033295382Z" level=info msg="Starting container: 509dbb4e9ee8a1d111116d82883b8bcaf502fb1be8d2a0478c3a9c6a300aa9c9" id=ff21e651-efad-4423-bb58-135cd96072a7 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 18:16:53 pause-321903 crio[2063]: time="2025-10-18T18:16:53.035541392Z" level=info msg="Started container" PID=2320 containerID=509dbb4e9ee8a1d111116d82883b8bcaf502fb1be8d2a0478c3a9c6a300aa9c9 description=kube-system/kube-apiserver-pause-321903/kube-apiserver id=ff21e651-efad-4423-bb58-135cd96072a7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9c94bbddf8ea0e16c2810e8d588df09313708a62714287940ec81408e87a12ea
	Oct 18 18:16:53 pause-321903 crio[2063]: time="2025-10-18T18:16:53.057362074Z" level=info msg="Created container 1230f9e5cf3e9675dbf837caca1f6efe823b10055878ee033b6d97fdec3c4a73: kube-system/kube-scheduler-pause-321903/kube-scheduler" id=c3eba72b-862f-4e98-bbc6-a8e10e051237 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 18:16:53 pause-321903 crio[2063]: time="2025-10-18T18:16:53.058004272Z" level=info msg="Starting container: 1230f9e5cf3e9675dbf837caca1f6efe823b10055878ee033b6d97fdec3c4a73" id=ddd5283b-6b6f-4387-b11f-5c0a54bbec94 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 18:16:53 pause-321903 crio[2063]: time="2025-10-18T18:16:53.059928219Z" level=info msg="Started container" PID=2338 containerID=1230f9e5cf3e9675dbf837caca1f6efe823b10055878ee033b6d97fdec3c4a73 description=kube-system/kube-scheduler-pause-321903/kube-scheduler id=ddd5283b-6b6f-4387-b11f-5c0a54bbec94 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9197cda6c5ed473b673a65209627eec1c79efb169d6a41ea11d5889c90099a7e
	Oct 18 18:17:03 pause-321903 crio[2063]: time="2025-10-18T18:17:03.261548555Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 18:17:03 pause-321903 crio[2063]: time="2025-10-18T18:17:03.266960045Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 18:17:03 pause-321903 crio[2063]: time="2025-10-18T18:17:03.267004345Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 18:17:03 pause-321903 crio[2063]: time="2025-10-18T18:17:03.267029371Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 18:17:03 pause-321903 crio[2063]: time="2025-10-18T18:17:03.279535161Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 18:17:03 pause-321903 crio[2063]: time="2025-10-18T18:17:03.279577418Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 18:17:03 pause-321903 crio[2063]: time="2025-10-18T18:17:03.279600811Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 18:17:03 pause-321903 crio[2063]: time="2025-10-18T18:17:03.289495287Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 18:17:03 pause-321903 crio[2063]: time="2025-10-18T18:17:03.289533975Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 18:17:03 pause-321903 crio[2063]: time="2025-10-18T18:17:03.289557048Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 18:17:03 pause-321903 crio[2063]: time="2025-10-18T18:17:03.295793541Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 18:17:03 pause-321903 crio[2063]: time="2025-10-18T18:17:03.295998475Z" level=info msg="Updated default CNI network name to kindnet"
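The excerpt above is the CRI-O systemd journal on the pause-321903 node (container restarts at 18:16:52, then the kindnet CNI config being re-read); the same window can be pulled directly from the node (a sketch):

    # Tail the CRI-O unit journal on the pause-321903 node
    out/minikube-linux-arm64 -p pause-321903 ssh -- "sudo journalctl -u crio --no-pager --since '5 minutes ago'"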
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	1230f9e5cf3e9       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   24 seconds ago       Running             kube-scheduler            1                   9197cda6c5ed4       kube-scheduler-pause-321903            kube-system
	509dbb4e9ee8a       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   24 seconds ago       Running             kube-apiserver            1                   9c94bbddf8ea0       kube-apiserver-pause-321903            kube-system
	b1f5de574a87e       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   24 seconds ago       Running             kube-controller-manager   1                   8a3ffc30a9859       kube-controller-manager-pause-321903   kube-system
	0a87401b26f8f       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   24 seconds ago       Running             coredns                   1                   20e0edde1fed0       coredns-66bc5c9577-bxt8s               kube-system
	4afe595d4901a       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   24 seconds ago       Running             kindnet-cni               1                   6db6d133cccd2       kindnet-h5sxp                          kube-system
	e27366c82d5cb       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   24 seconds ago       Running             etcd                      1                   aed3c89638754       etcd-pause-321903                      kube-system
	657241d3f85a7       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   24 seconds ago       Running             kube-proxy                1                   d835638d52db0       kube-proxy-6ntpd                       kube-system
	b0ab4ec6d0c28       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   47 seconds ago       Exited              coredns                   0                   20e0edde1fed0       coredns-66bc5c9577-bxt8s               kube-system
	d8d01459be672       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   6db6d133cccd2       kindnet-h5sxp                          kube-system
	fb3fca7cd1009       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   d835638d52db0       kube-proxy-6ntpd                       kube-system
	2438a9ff996c0       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   9c94bbddf8ea0       kube-apiserver-pause-321903            kube-system
	fe01b2bdff4a1       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   8a3ffc30a9859       kube-controller-manager-pause-321903   kube-system
	9b891540c5dec       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   9197cda6c5ed4       kube-scheduler-pause-321903            kube-system
	e1cc985af8447       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   aed3c89638754       etcd-pause-321903                      kube-system
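The table above lists one Running and one Exited instance of each of the node's containers, matching the restart CRI-O logged at 18:16:52; it is the runtime's own view and can be reproduced with (a sketch):

    # List all containers, including exited ones, as CRI-O reports them
    out/minikube-linux-arm64 -p pause-321903 ssh -- "sudo crictl ps -a"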
	
	
	==> coredns [0a87401b26f8fe5eca4265d3f61980cf50be35c9ad6297578a3c1117545e88e9] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39864 - 25223 "HINFO IN 7704298137972469783.8140309687720505532. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020442227s
	
	
	==> coredns [b0ab4ec6d0c28b9d0f51329318dcec692b7c5e3207c1daea9fe798392dcf0b44] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54196 - 12069 "HINFO IN 4332264017597495645.7173261532093308133. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.042550666s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-321903
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-321903
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=pause-321903
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T18_15_43_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 18:15:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-321903
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 18:17:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 18:16:29 +0000   Sat, 18 Oct 2025 18:15:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 18:16:29 +0000   Sat, 18 Oct 2025 18:15:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 18:16:29 +0000   Sat, 18 Oct 2025 18:15:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 18:16:29 +0000   Sat, 18 Oct 2025 18:16:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-321903
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                99a5aadd-8fb2-4a98-b85f-359bf169b051
	  Boot ID:                    8a1d6305-2994-4aa4-a4cc-6b62966b9918
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-bxt8s                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     90s
	  kube-system                 etcd-pause-321903                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         95s
	  kube-system                 kindnet-h5sxp                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      90s
	  kube-system                 kube-apiserver-pause-321903             250m (12%)    0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 kube-controller-manager-pause-321903    200m (10%)    0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 kube-proxy-6ntpd                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 kube-scheduler-pause-321903             100m (5%)     0 (0%)      0 (0%)           0 (0%)         96s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 88s   kube-proxy       
	  Normal   Starting                 16s   kube-proxy       
	  Normal   Starting                 95s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 95s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  95s   kubelet          Node pause-321903 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    95s   kubelet          Node pause-321903 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     95s   kubelet          Node pause-321903 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           91s   node-controller  Node pause-321903 event: Registered Node pause-321903 in Controller
	  Normal   NodeReady                48s   kubelet          Node pause-321903 status is now: NodeReady
	  Normal   RegisteredNode           14s   node-controller  Node pause-321903 event: Registered Node pause-321903 in Controller
	
	
	==> dmesg <==
	[Oct18 17:48] overlayfs: idmapped layers are currently not supported
	[  +2.594489] overlayfs: idmapped layers are currently not supported
	[Oct18 17:50] overlayfs: idmapped layers are currently not supported
	[ +42.240353] overlayfs: idmapped layers are currently not supported
	[Oct18 17:51] overlayfs: idmapped layers are currently not supported
	[Oct18 17:53] overlayfs: idmapped layers are currently not supported
	[Oct18 17:58] overlayfs: idmapped layers are currently not supported
	[ +33.320958] overlayfs: idmapped layers are currently not supported
	[Oct18 18:00] overlayfs: idmapped layers are currently not supported
	[Oct18 18:01] overlayfs: idmapped layers are currently not supported
	[Oct18 18:02] overlayfs: idmapped layers are currently not supported
	[Oct18 18:04] overlayfs: idmapped layers are currently not supported
	[ +24.403909] overlayfs: idmapped layers are currently not supported
	[  +6.162774] overlayfs: idmapped layers are currently not supported
	[Oct18 18:05] overlayfs: idmapped layers are currently not supported
	[ +25.128760] overlayfs: idmapped layers are currently not supported
	[Oct18 18:06] overlayfs: idmapped layers are currently not supported
	[Oct18 18:07] overlayfs: idmapped layers are currently not supported
	[Oct18 18:08] overlayfs: idmapped layers are currently not supported
	[Oct18 18:09] overlayfs: idmapped layers are currently not supported
	[Oct18 18:11] overlayfs: idmapped layers are currently not supported
	[Oct18 18:13] overlayfs: idmapped layers are currently not supported
	[ +30.969240] overlayfs: idmapped layers are currently not supported
	[Oct18 18:15] overlayfs: idmapped layers are currently not supported
	[Oct18 18:16] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [e1cc985af8447c811346ce79c30a2b8fa589396bae84a02969101095e0145ae8] <==
	{"level":"warn","ts":"2025-10-18T18:15:38.669175Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:15:38.685966Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:15:38.707037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:15:38.727125Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:15:38.741517Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:15:38.756606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:15:38.824472Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39278","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T18:16:44.410900Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-18T18:16:44.410951Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-321903","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-10-18T18:16:44.411032Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-18T18:16:44.411086Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-18T18:16:44.552424Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T18:16:44.552511Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2025-10-18T18:16:44.552622Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-18T18:16:44.552640Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-18T18:16:44.552985Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-18T18:16:44.553016Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-18T18:16:44.553026Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-18T18:16:44.552892Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-18T18:16:44.553046Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-18T18:16:44.553053Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T18:16:44.555916Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-10-18T18:16:44.556003Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T18:16:44.556043Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-18T18:16:44.556051Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-321903","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> etcd [e27366c82d5cb638c304a969a81298d5df85aada0411f1d79cdf701c215ca024] <==
	{"level":"warn","ts":"2025-10-18T18:16:56.980409Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:16:57.046858Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:16:57.123368Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:16:57.169458Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:16:57.253114Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:16:57.322881Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:16:57.378407Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:16:57.442777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:16:57.493399Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:16:57.551043Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:16:57.609898Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:16:57.650366Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:16:57.686403Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:16:57.739807Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:16:57.768329Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:16:57.841095Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:16:57.876323Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:16:57.941961Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:16:57.985491Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:16:58.023653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:16:58.122457Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:16:58.188730Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:16:58.224539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:16:58.227689Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:16:58.355927Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50712","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 18:17:17 up  1:59,  0 user,  load average: 2.99, 3.16, 2.61
	Linux pause-321903 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4afe595d4901a5c13fe23c2f892b2c0e58181ac48b98ee859ee099da4d4a1607] <==
	I1018 18:16:53.017829       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 18:16:53.019399       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1018 18:16:53.019522       1 main.go:148] setting mtu 1500 for CNI 
	I1018 18:16:53.019580       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 18:16:53.019920       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T18:16:53Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 18:16:53.273290       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 18:16:53.273324       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 18:16:53.273336       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 18:16:53.274291       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 18:17:00.577096       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 18:17:00.577273       1 metrics.go:72] Registering metrics
	I1018 18:17:00.577379       1 controller.go:711] "Syncing nftables rules"
	I1018 18:17:03.261159       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 18:17:03.261216       1 main.go:301] handling current node
	I1018 18:17:13.260990       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 18:17:13.261047       1 main.go:301] handling current node
	
	
	==> kindnet [d8d01459be672ec4fd8b85084bd40212b1672ef1c3a3885c3617da35e9c4fb8b] <==
	I1018 18:15:48.620030       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 18:15:48.620453       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1018 18:15:48.620636       1 main.go:148] setting mtu 1500 for CNI 
	I1018 18:15:48.620681       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 18:15:48.620726       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T18:15:48Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 18:15:48.814934       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 18:15:48.814966       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 18:15:48.814976       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 18:15:48.815282       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1018 18:16:18.815345       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1018 18:16:18.815421       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1018 18:16:18.815542       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1018 18:16:18.815653       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1018 18:16:20.415980       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 18:16:20.416054       1 metrics.go:72] Registering metrics
	I1018 18:16:20.416132       1 controller.go:711] "Syncing nftables rules"
	I1018 18:16:28.817003       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 18:16:28.817164       1 main.go:301] handling current node
	I1018 18:16:38.821039       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 18:16:38.821143       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2438a9ff996c0d245fc840f1c24567b79350ac528f1466e447512ce99687f671] <==
	0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 18:16:44.431913       1 logging.go:55] [core] [Channel #31 SubChannel #33]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 18:16:44.431953       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 18:16:44.431991       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 18:16:44.432031       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 18:16:44.432073       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 18:16:44.432114       1 logging.go:55] [core] [Channel #235 SubChannel #237]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 18:16:44.432154       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 18:16:44.432198       1 logging.go:55] [core] [Channel #147 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 18:16:44.432237       1 logging.go:55] [core] [Channel #183 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 18:16:44.432281       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 18:16:44.432325       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 18:16:44.432366       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 18:16:44.432407       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 18:16:44.432445       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 18:16:44.432484       1 logging.go:55] [core] [Channel #159 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 18:16:44.432524       1 logging.go:55] [core] [Channel #243 SubChannel #245]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 18:16:44.432565       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 18:16:44.432603       1 logging.go:55] [core] [Channel #119 SubChannel #121]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 18:16:44.432645       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 18:16:44.432697       1 logging.go:55] [core] [Channel #27 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 18:16:44.441502       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 18:16:44.441583       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 18:16:44.441660       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 18:16:44.447089       1 logging.go:55] [core] [Channel #13 SubChannel #15]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [509dbb4e9ee8a1d111116d82883b8bcaf502fb1be8d2a0478c3a9c6a300aa9c9] <==
	I1018 18:17:00.296161       1 aggregator.go:171] initial CRD sync complete...
	I1018 18:17:00.297915       1 autoregister_controller.go:144] Starting autoregister controller
	I1018 18:17:00.297956       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 18:17:00.297145       1 policy_source.go:240] refreshing policies
	I1018 18:17:00.307546       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1018 18:17:00.321036       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1018 18:17:00.317563       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 18:17:00.447105       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1018 18:17:00.449002       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1018 18:17:00.449270       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1018 18:17:00.449554       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1018 18:17:00.449682       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1018 18:17:00.457702       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1018 18:17:00.321188       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 18:17:00.508321       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1018 18:17:00.435936       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1018 18:17:00.442421       1 cache.go:39] Caches are synced for autoregister controller
	I1018 18:17:00.570352       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1018 18:17:00.581626       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1018 18:17:00.655951       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 18:17:02.166564       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 18:17:03.706734       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 18:17:03.822496       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 18:17:03.911829       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 18:17:03.960605       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [b1f5de574a87e23c6882f0de27620ef3b5dcc7d2dd2cca428ed032ed283b7f17] <==
	I1018 18:17:03.657586       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 18:17:03.658844       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1018 18:17:03.664496       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1018 18:17:03.667719       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1018 18:17:03.667892       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 18:17:03.675306       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 18:17:03.679146       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 18:17:03.679243       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 18:17:03.681141       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 18:17:03.684273       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1018 18:17:03.684612       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 18:17:03.693109       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1018 18:17:03.694906       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1018 18:17:03.695028       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 18:17:03.695121       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-321903"
	I1018 18:17:03.695171       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1018 18:17:03.699167       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 18:17:03.700317       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 18:17:03.706970       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1018 18:17:03.707150       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 18:17:03.711335       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1018 18:17:03.717107       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1018 18:17:03.717319       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1018 18:17:03.721405       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 18:17:03.722953       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	
	
	==> kube-controller-manager [fe01b2bdff4a1ff4347ebb1876a7fe94ea12cf41798722170b21b95c0ea7477c] <==
	I1018 18:15:46.623422       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1018 18:15:46.623807       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 18:15:46.623947       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1018 18:15:46.624166       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1018 18:15:46.632258       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-321903" podCIDRs=["10.244.0.0/24"]
	I1018 18:15:46.632395       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1018 18:15:46.635021       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1018 18:15:46.635484       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1018 18:15:46.635528       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1018 18:15:46.635555       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1018 18:15:46.635644       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 18:15:46.642235       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 18:15:46.642309       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 18:15:46.657740       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1018 18:15:46.666927       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 18:15:46.666957       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 18:15:46.666965       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 18:15:46.667034       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1018 18:15:46.667118       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 18:15:46.667229       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-321903"
	I1018 18:15:46.667284       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1018 18:15:46.667560       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 18:15:46.693490       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 18:15:46.732824       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 18:16:31.675406       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [657241d3f85a70a79a49eceb02219d308003745e67fb44fd088e3e9c4b8e4772] <==
	I1018 18:16:56.237919       1 server_linux.go:53] "Using iptables proxy"
	I1018 18:16:58.767811       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 18:17:00.677038       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 18:17:00.677185       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1018 18:17:00.677341       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 18:17:01.313876       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 18:17:01.313946       1 server_linux.go:132] "Using iptables Proxier"
	I1018 18:17:01.476512       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 18:17:01.477114       1 server.go:527] "Version info" version="v1.34.1"
	I1018 18:17:01.477370       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 18:17:01.478894       1 config.go:200] "Starting service config controller"
	I1018 18:17:01.479017       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 18:17:01.479068       1 config.go:106] "Starting endpoint slice config controller"
	I1018 18:17:01.479130       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 18:17:01.479223       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 18:17:01.479254       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 18:17:01.612984       1 config.go:309] "Starting node config controller"
	I1018 18:17:01.621382       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 18:17:01.686574       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 18:17:01.727351       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 18:17:01.784769       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 18:17:01.786067       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [fb3fca7cd1009ed922f3f234148284e89a8be61f5965386518f93c0ab5ecbb2d] <==
	I1018 18:15:48.445060       1 server_linux.go:53] "Using iptables proxy"
	I1018 18:15:48.642653       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 18:15:48.743757       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 18:15:48.743873       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1018 18:15:48.743984       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 18:15:48.761982       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 18:15:48.762036       1 server_linux.go:132] "Using iptables Proxier"
	I1018 18:15:48.766145       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 18:15:48.766458       1 server.go:527] "Version info" version="v1.34.1"
	I1018 18:15:48.766482       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 18:15:48.769873       1 config.go:106] "Starting endpoint slice config controller"
	I1018 18:15:48.769949       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 18:15:48.770280       1 config.go:200] "Starting service config controller"
	I1018 18:15:48.770362       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 18:15:48.772191       1 config.go:309] "Starting node config controller"
	I1018 18:15:48.772210       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 18:15:48.772218       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 18:15:48.772630       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 18:15:48.772647       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 18:15:48.870749       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 18:15:48.870762       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 18:15:48.873282       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [1230f9e5cf3e9675dbf837caca1f6efe823b10055878ee033b6d97fdec3c4a73] <==
	I1018 18:16:57.659159       1 serving.go:386] Generated self-signed cert in-memory
	I1018 18:17:02.506635       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 18:17:02.506676       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 18:17:02.531866       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 18:17:02.531979       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1018 18:17:02.532007       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1018 18:17:02.532045       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 18:17:02.538449       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 18:17:02.538488       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 18:17:02.538512       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 18:17:02.538520       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 18:17:02.632169       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1018 18:17:02.639660       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 18:17:02.639662       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [9b891540c5deca63a1228b886f79648205e27476e41e85a70e1a582b416d1b3f] <==
	E1018 18:15:39.641909       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 18:15:39.642073       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1018 18:15:39.642140       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 18:15:39.642187       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 18:15:40.450993       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 18:15:40.454448       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 18:15:40.506516       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 18:15:40.506741       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 18:15:40.526371       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1018 18:15:40.526649       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 18:15:40.559670       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 18:15:40.591759       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 18:15:40.614688       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 18:15:40.652369       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 18:15:40.749329       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 18:15:40.826304       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 18:15:40.909272       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 18:15:40.916372       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I1018 18:15:43.605802       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 18:16:44.425948       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1018 18:16:44.427945       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1018 18:16:44.427967       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1018 18:16:44.428078       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 18:16:44.429148       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1018 18:16:44.429177       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 18 18:16:52 pause-321903 kubelet[1307]: E1018 18:16:52.629153    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-h5sxp\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="5909bf09-6c59-4f6e-859c-ac6a5c0792f9" pod="kube-system/kindnet-h5sxp"
	Oct 18 18:16:52 pause-321903 kubelet[1307]: E1018 18:16:52.629614    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6ntpd\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="34df88c7-c080-4704-a7cd-012f263ce7b9" pod="kube-system/kube-proxy-6ntpd"
	Oct 18 18:16:52 pause-321903 kubelet[1307]: E1018 18:16:52.629949    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-bxt8s\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="0fbf5bcd-ba89-4603-85bf-895e985ed0cb" pod="kube-system/coredns-66bc5c9577-bxt8s"
	Oct 18 18:16:52 pause-321903 kubelet[1307]: E1018 18:16:52.630315    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-321903\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="676b2f286830ba93b13188fe018ad23f" pod="kube-system/kube-controller-manager-pause-321903"
	Oct 18 18:16:52 pause-321903 kubelet[1307]: E1018 18:16:52.630627    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-321903\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="817ca23bfd256fb34800f207379338f6" pod="kube-system/kube-apiserver-pause-321903"
	Oct 18 18:16:52 pause-321903 kubelet[1307]: E1018 18:16:52.630931    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-321903\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="a3ead399c9d5e599281fbbf31ce37802" pod="kube-system/etcd-pause-321903"
	Oct 18 18:16:52 pause-321903 kubelet[1307]: E1018 18:16:52.631232    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-321903\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="d67c068537ef56c5aeb9ec4b71ea396f" pod="kube-system/kube-scheduler-pause-321903"
	Oct 18 18:16:52 pause-321903 kubelet[1307]: I1018 18:16:52.634013    1307 scope.go:117] "RemoveContainer" containerID="9b891540c5deca63a1228b886f79648205e27476e41e85a70e1a582b416d1b3f"
	Oct 18 18:16:52 pause-321903 kubelet[1307]: E1018 18:16:52.634406    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-bxt8s\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="0fbf5bcd-ba89-4603-85bf-895e985ed0cb" pod="kube-system/coredns-66bc5c9577-bxt8s"
	Oct 18 18:16:52 pause-321903 kubelet[1307]: E1018 18:16:52.634974    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-321903\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="676b2f286830ba93b13188fe018ad23f" pod="kube-system/kube-controller-manager-pause-321903"
	Oct 18 18:16:52 pause-321903 kubelet[1307]: E1018 18:16:52.635286    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-321903\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="817ca23bfd256fb34800f207379338f6" pod="kube-system/kube-apiserver-pause-321903"
	Oct 18 18:16:52 pause-321903 kubelet[1307]: E1018 18:16:52.635585    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-321903\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="a3ead399c9d5e599281fbbf31ce37802" pod="kube-system/etcd-pause-321903"
	Oct 18 18:16:52 pause-321903 kubelet[1307]: E1018 18:16:52.635893    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-321903\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="d67c068537ef56c5aeb9ec4b71ea396f" pod="kube-system/kube-scheduler-pause-321903"
	Oct 18 18:16:52 pause-321903 kubelet[1307]: E1018 18:16:52.636221    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-h5sxp\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="5909bf09-6c59-4f6e-859c-ac6a5c0792f9" pod="kube-system/kindnet-h5sxp"
	Oct 18 18:16:52 pause-321903 kubelet[1307]: E1018 18:16:52.636522    1307 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6ntpd\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="34df88c7-c080-4704-a7cd-012f263ce7b9" pod="kube-system/kube-proxy-6ntpd"
	Oct 18 18:16:52 pause-321903 kubelet[1307]: E1018 18:16:52.875099    1307 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/events\": dial tcp 192.168.85.2:8443: connect: connection refused" event="&Event{ObjectMeta:{kube-scheduler-pause-321903.186fa8a4bd0c8356  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-scheduler-pause-321903,UID:d67c068537ef56c5aeb9ec4b71ea396f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://127.0.0.1:10259/readyz\": dial tcp 127.0.0.1:10259: connect: connection refused,Source:EventSource{Component:kubelet,Host:pause-321903,},FirstTimestamp:2025-10-18 18:16:44.82497007 +0000 UTC m=+62.719894291,LastTimestamp:2025-10-18 18:16:44.82497007 +0000 UTC m=+62.719894291,Count:1,Type:Warning,EventTime:0001-01-01
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:pause-321903,}"
	Oct 18 18:16:59 pause-321903 kubelet[1307]: E1018 18:16:59.911278    1307 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-bxt8s\" is forbidden: User \"system:node:pause-321903\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-321903' and this object" podUID="0fbf5bcd-ba89-4603-85bf-895e985ed0cb" pod="kube-system/coredns-66bc5c9577-bxt8s"
	Oct 18 18:16:59 pause-321903 kubelet[1307]: E1018 18:16:59.921973    1307 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-321903\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-321903' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Oct 18 18:16:59 pause-321903 kubelet[1307]: E1018 18:16:59.922177    1307 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-321903\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-321903' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Oct 18 18:16:59 pause-321903 kubelet[1307]: E1018 18:16:59.922440    1307 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-321903\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-321903' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Oct 18 18:17:00 pause-321903 kubelet[1307]: E1018 18:17:00.210438    1307 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-321903\" is forbidden: User \"system:node:pause-321903\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-321903' and this object" podUID="676b2f286830ba93b13188fe018ad23f" pod="kube-system/kube-controller-manager-pause-321903"
	Oct 18 18:17:02 pause-321903 kubelet[1307]: W1018 18:17:02.571440    1307 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Oct 18 18:17:11 pause-321903 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 18:17:11 pause-321903 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 18:17:11 pause-321903 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-321903 -n pause-321903
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-321903 -n pause-321903: exit status 2 (438.503087ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-321903 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (7.79s)
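
For local triage of a pause failure like the one above, a minimal sketch of the same checks run by hand (same profile name as in the logs; the crictl step is an assumption about what is available inside the crio-based node container, not something this test ran):

    # report the apiserver state the same way the post-mortem helper does
    out/minikube-linux-arm64 status -p pause-321903 --format='{{.APIServer}}'
    # inspect container state directly inside the node to see what is actually paused or stopped
    out/minikube-linux-arm64 ssh -p pause-321903 -- sudo crictl ps -a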

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.47s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-918475 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-918475 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (261.042494ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T18:19:34Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-918475 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-918475 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-918475 describe deploy/metrics-server -n kube-system: exit status 1 (85.619075ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-918475 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
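Before the post-mortem below: the MK_ADDON_ENABLE_PAUSED error above comes from minikube's paused-state probe (`sudo runc list -f json`) failing inside the node container. A minimal sketch for re-running that probe by hand, using only commands already shown in this report (same profile name as above; treating /run/runc as runc's default state directory is an assumption drawn from the error message):

    # re-run the probe that produced "open /run/runc: no such file or directory"
    out/minikube-linux-arm64 ssh -p old-k8s-version-918475 -- sudo runc list -f json
    # check whether the runc state directory exists at all on this crio-based node
    out/minikube-linux-arm64 ssh -p old-k8s-version-918475 -- sudo ls -ld /run/runc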
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-918475
helpers_test.go:243: (dbg) docker inspect old-k8s-version-918475:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "13ab62783a421f101660de74d2bec3818ff41a6620bfd3ec135d6adb2e8c1df6",
	        "Created": "2025-10-18T18:18:25.775142041Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 190676,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T18:18:25.844057468Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/13ab62783a421f101660de74d2bec3818ff41a6620bfd3ec135d6adb2e8c1df6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/13ab62783a421f101660de74d2bec3818ff41a6620bfd3ec135d6adb2e8c1df6/hostname",
	        "HostsPath": "/var/lib/docker/containers/13ab62783a421f101660de74d2bec3818ff41a6620bfd3ec135d6adb2e8c1df6/hosts",
	        "LogPath": "/var/lib/docker/containers/13ab62783a421f101660de74d2bec3818ff41a6620bfd3ec135d6adb2e8c1df6/13ab62783a421f101660de74d2bec3818ff41a6620bfd3ec135d6adb2e8c1df6-json.log",
	        "Name": "/old-k8s-version-918475",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-918475:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-918475",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "13ab62783a421f101660de74d2bec3818ff41a6620bfd3ec135d6adb2e8c1df6",
	                "LowerDir": "/var/lib/docker/overlay2/3cbaaca74a96e66e2281894a8ded9a8b4932ecc5b1eaa08dd2c608cf2a8fb5aa-init/diff:/var/lib/docker/overlay2/584ab177b02ad2db5330471b7171ad39934c457d8615b9ee4939a04b59f78474/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3cbaaca74a96e66e2281894a8ded9a8b4932ecc5b1eaa08dd2c608cf2a8fb5aa/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3cbaaca74a96e66e2281894a8ded9a8b4932ecc5b1eaa08dd2c608cf2a8fb5aa/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3cbaaca74a96e66e2281894a8ded9a8b4932ecc5b1eaa08dd2c608cf2a8fb5aa/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-918475",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-918475/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-918475",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-918475",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-918475",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "13f5e67e5d9e6b74f7e82ea4b09a0b5b1bf09e7bb424a97ad6995b326d741f2d",
	            "SandboxKey": "/var/run/docker/netns/13f5e67e5d9e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33043"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33044"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33047"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33045"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33046"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-918475": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ba:cf:c8:11:a9:d2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2f21c2763fceb3911d220e045f4c363e42b3b9b9b29b62d56c07c23b82cc830b",
	                    "EndpointID": "c753b4ab14f36ff04201832acd66b6ba3993de92b22946d625d423ab80b2304a",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-918475",
	                        "13ab62783a42"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-918475 -n old-k8s-version-918475
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-918475 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-918475 logs -n 25: (1.193400363s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────────
───┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────────
───┤
	│ ssh     │ -p cilium-111074 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ cilium-111074            │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │                     │
	│ ssh     │ -p cilium-111074 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-111074            │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │                     │
	│ ssh     │ -p cilium-111074 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-111074            │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │                     │
	│ ssh     │ -p cilium-111074 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-111074            │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │                     │
	│ ssh     │ -p cilium-111074 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-111074            │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │                     │
	│ ssh     │ -p cilium-111074 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-111074            │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │                     │
	│ ssh     │ -p cilium-111074 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-111074            │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │                     │
	│ ssh     │ -p cilium-111074 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-111074            │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │                     │
	│ ssh     │ -p cilium-111074 sudo containerd config dump                                                                                                                                                                                                  │ cilium-111074            │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │                     │
	│ ssh     │ -p cilium-111074 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-111074            │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │                     │
	│ ssh     │ -p cilium-111074 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-111074            │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │                     │
	│ ssh     │ -p cilium-111074 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-111074            │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │                     │
	│ ssh     │ -p cilium-111074 sudo crio config                                                                                                                                                                                                             │ cilium-111074            │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │                     │
	│ delete  │ -p cilium-111074                                                                                                                                                                                                                              │ cilium-111074            │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │ 18 Oct 25 18:16 UTC │
	│ start   │ -p force-systemd-env-785999 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-785999 │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │ 18 Oct 25 18:17 UTC │
	│ pause   │ -p pause-321903 --alsologtostderr -v=5                                                                                                                                                                                                        │ pause-321903             │ jenkins │ v1.37.0 │ 18 Oct 25 18:17 UTC │                     │
	│ delete  │ -p pause-321903                                                                                                                                                                                                                               │ pause-321903             │ jenkins │ v1.37.0 │ 18 Oct 25 18:17 UTC │ 18 Oct 25 18:17 UTC │
	│ start   │ -p cert-expiration-463770 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-463770   │ jenkins │ v1.37.0 │ 18 Oct 25 18:17 UTC │ 18 Oct 25 18:18 UTC │
	│ delete  │ -p force-systemd-env-785999                                                                                                                                                                                                                   │ force-systemd-env-785999 │ jenkins │ v1.37.0 │ 18 Oct 25 18:17 UTC │ 18 Oct 25 18:17 UTC │
	│ start   │ -p cert-options-327418 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-327418      │ jenkins │ v1.37.0 │ 18 Oct 25 18:17 UTC │ 18 Oct 25 18:18 UTC │
	│ ssh     │ cert-options-327418 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-327418      │ jenkins │ v1.37.0 │ 18 Oct 25 18:18 UTC │ 18 Oct 25 18:18 UTC │
	│ ssh     │ -p cert-options-327418 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-327418      │ jenkins │ v1.37.0 │ 18 Oct 25 18:18 UTC │ 18 Oct 25 18:18 UTC │
	│ delete  │ -p cert-options-327418                                                                                                                                                                                                                        │ cert-options-327418      │ jenkins │ v1.37.0 │ 18 Oct 25 18:18 UTC │ 18 Oct 25 18:18 UTC │
	│ start   │ -p old-k8s-version-918475 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-918475   │ jenkins │ v1.37.0 │ 18 Oct 25 18:18 UTC │ 18 Oct 25 18:19 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-918475 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-918475   │ jenkins │ v1.37.0 │ 18 Oct 25 18:19 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────────
───┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 18:18:19
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 18:18:19.672100  190297 out.go:360] Setting OutFile to fd 1 ...
	I1018 18:18:19.672222  190297 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 18:18:19.672234  190297 out.go:374] Setting ErrFile to fd 2...
	I1018 18:18:19.672239  190297 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 18:18:19.672488  190297 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-2509/.minikube/bin
	I1018 18:18:19.672932  190297 out.go:368] Setting JSON to false
	I1018 18:18:19.673882  190297 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7249,"bootTime":1760804251,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1018 18:18:19.673955  190297 start.go:141] virtualization:  
	I1018 18:18:19.677552  190297 out.go:179] * [old-k8s-version-918475] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 18:18:19.681939  190297 out.go:179]   - MINIKUBE_LOCATION=21409
	I1018 18:18:19.682101  190297 notify.go:220] Checking for updates...
	I1018 18:18:19.688499  190297 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 18:18:19.691679  190297 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-2509/kubeconfig
	I1018 18:18:19.694895  190297 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-2509/.minikube
	I1018 18:18:19.698076  190297 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 18:18:19.701027  190297 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 18:18:19.704607  190297 config.go:182] Loaded profile config "cert-expiration-463770": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 18:18:19.704736  190297 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 18:18:19.737070  190297 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 18:18:19.737196  190297 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 18:18:19.800289  190297 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 18:18:19.790676243 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 18:18:19.800409  190297 docker.go:318] overlay module found
	I1018 18:18:19.804718  190297 out.go:179] * Using the docker driver based on user configuration
	I1018 18:18:19.807752  190297 start.go:305] selected driver: docker
	I1018 18:18:19.807775  190297 start.go:925] validating driver "docker" against <nil>
	I1018 18:18:19.807804  190297 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 18:18:19.808525  190297 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 18:18:19.863450  190297 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 18:18:19.852801035 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 18:18:19.863619  190297 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 18:18:19.863884  190297 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 18:18:19.867051  190297 out.go:179] * Using Docker driver with root privileges
	I1018 18:18:19.870044  190297 cni.go:84] Creating CNI manager for ""
	I1018 18:18:19.870123  190297 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 18:18:19.870139  190297 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 18:18:19.870220  190297 start.go:349] cluster config:
	{Name:old-k8s-version-918475 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-918475 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SS
HAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 18:18:19.873431  190297 out.go:179] * Starting "old-k8s-version-918475" primary control-plane node in "old-k8s-version-918475" cluster
	I1018 18:18:19.876284  190297 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 18:18:19.879305  190297 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 18:18:19.882157  190297 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1018 18:18:19.882211  190297 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1018 18:18:19.882225  190297 cache.go:58] Caching tarball of preloaded images
	I1018 18:18:19.882256  190297 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 18:18:19.882314  190297 preload.go:233] Found /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 18:18:19.882324  190297 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1018 18:18:19.882439  190297 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/old-k8s-version-918475/config.json ...
	I1018 18:18:19.882486  190297 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/old-k8s-version-918475/config.json: {Name:mk6d6ec1b5cd0665efc6f035487fe19c6953348b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:18:19.901513  190297 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 18:18:19.901535  190297 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 18:18:19.901548  190297 cache.go:232] Successfully downloaded all kic artifacts
	I1018 18:18:19.901571  190297 start.go:360] acquireMachinesLock for old-k8s-version-918475: {Name:mke4efc3cc1fc03dd6efc3fd3e060d8181392707 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 18:18:19.901679  190297 start.go:364] duration metric: took 84.317µs to acquireMachinesLock for "old-k8s-version-918475"
	I1018 18:18:19.901710  190297 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-918475 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-918475 Namespace:default APIServerHAVIP:
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQ
emuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 18:18:19.901782  190297 start.go:125] createHost starting for "" (driver="docker")
	I1018 18:18:19.905145  190297 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1018 18:18:19.905376  190297 start.go:159] libmachine.API.Create for "old-k8s-version-918475" (driver="docker")
	I1018 18:18:19.905420  190297 client.go:168] LocalClient.Create starting
	I1018 18:18:19.905499  190297 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem
	I1018 18:18:19.905539  190297 main.go:141] libmachine: Decoding PEM data...
	I1018 18:18:19.905560  190297 main.go:141] libmachine: Parsing certificate...
	I1018 18:18:19.905617  190297 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem
	I1018 18:18:19.905638  190297 main.go:141] libmachine: Decoding PEM data...
	I1018 18:18:19.905658  190297 main.go:141] libmachine: Parsing certificate...
	I1018 18:18:19.906014  190297 cli_runner.go:164] Run: docker network inspect old-k8s-version-918475 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 18:18:19.921611  190297 cli_runner.go:211] docker network inspect old-k8s-version-918475 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 18:18:19.921705  190297 network_create.go:284] running [docker network inspect old-k8s-version-918475] to gather additional debugging logs...
	I1018 18:18:19.921728  190297 cli_runner.go:164] Run: docker network inspect old-k8s-version-918475
	W1018 18:18:19.937009  190297 cli_runner.go:211] docker network inspect old-k8s-version-918475 returned with exit code 1
	I1018 18:18:19.937037  190297 network_create.go:287] error running [docker network inspect old-k8s-version-918475]: docker network inspect old-k8s-version-918475: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-918475 not found
	I1018 18:18:19.937051  190297 network_create.go:289] output of [docker network inspect old-k8s-version-918475]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-918475 not found
	
	** /stderr **
	I1018 18:18:19.937144  190297 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 18:18:19.954620  190297 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-903568cdf824 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:7a:80:c0:8c:ed} reservation:<nil>}
	I1018 18:18:19.954993  190297 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-ee9fcaab9ca8 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a6:a7:65:1b:c0:41} reservation:<nil>}
	I1018 18:18:19.955316  190297 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-414fc11e154b IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:86:f0:a8:1a:86:00} reservation:<nil>}
	I1018 18:18:19.955708  190297 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019ed220}
	I1018 18:18:19.955736  190297 network_create.go:124] attempt to create docker network old-k8s-version-918475 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1018 18:18:19.955799  190297 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-918475 old-k8s-version-918475
	I1018 18:18:20.037800  190297 network_create.go:108] docker network old-k8s-version-918475 192.168.76.0/24 created
	I1018 18:18:20.037835  190297 kic.go:121] calculated static IP "192.168.76.2" for the "old-k8s-version-918475" container
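	The three "skipping subnet" lines above show minikube walking the docker bridge networks already present (192.168.49.0/24, .58 and .67) and taking the first free /24, 192.168.76.0/24, with .2 reserved for the node. As an illustrative check, not part of the test run, the claimed subnets can be listed with the same docker inspect templates the log itself uses:
	    docker network ls --format '{{.Name}}' \
	      | xargs -n1 docker network inspect --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}}{{end}}'
	    docker network inspect old-k8s-version-918475 --format '{{range .IPAM.Config}}{{.Subnet}} gw {{.Gateway}}{{end}}'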
	I1018 18:18:20.037911  190297 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 18:18:20.055331  190297 cli_runner.go:164] Run: docker volume create old-k8s-version-918475 --label name.minikube.sigs.k8s.io=old-k8s-version-918475 --label created_by.minikube.sigs.k8s.io=true
	I1018 18:18:20.075085  190297 oci.go:103] Successfully created a docker volume old-k8s-version-918475
	I1018 18:18:20.075178  190297 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-918475-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-918475 --entrypoint /usr/bin/test -v old-k8s-version-918475:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 18:18:20.640304  190297 oci.go:107] Successfully prepared a docker volume old-k8s-version-918475
	I1018 18:18:20.640351  190297 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1018 18:18:20.640377  190297 kic.go:194] Starting extracting preloaded images to volume ...
	I1018 18:18:20.640449  190297 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-918475:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1018 18:18:25.702630  190297 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-918475:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (5.062142066s)
	I1018 18:18:25.702660  190297 kic.go:203] duration metric: took 5.062281505s to extract preloaded images to volume ...
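	The preloaded image tarball for Kubernetes v1.28.0 on cri-o is unpacked into the old-k8s-version-918475 volume in roughly 5s. A hedged spot-check, mirroring the --entrypoint override used by the tar step above and the kicbase tag from the log, would be:
	    docker volume inspect old-k8s-version-918475 --format '{{.Mountpoint}}'
	    docker run --rm --entrypoint /bin/ls -v old-k8s-version-918475:/var \
	      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757 /var/lib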
	W1018 18:18:25.702815  190297 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1018 18:18:25.702934  190297 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 18:18:25.760345  190297 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-918475 --name old-k8s-version-918475 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-918475 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-918475 --network old-k8s-version-918475 --ip 192.168.76.2 --volume old-k8s-version-918475:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 18:18:26.065440  190297 cli_runner.go:164] Run: docker container inspect old-k8s-version-918475 --format={{.State.Running}}
	I1018 18:18:26.088635  190297 cli_runner.go:164] Run: docker container inspect old-k8s-version-918475 --format={{.State.Status}}
	I1018 18:18:26.113984  190297 cli_runner.go:164] Run: docker exec old-k8s-version-918475 stat /var/lib/dpkg/alternatives/iptables
	I1018 18:18:26.173837  190297 oci.go:144] the created container "old-k8s-version-918475" has a running status.
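	The node container publishes ports 22, 2376, 5000, 8443 and 32443 on ephemeral host ports bound to 127.0.0.1; the SSH mapping is what the provisioner dials below (127.0.0.1:33043 in this run). Illustrative lookup only, the port numbers vary per run:
	    docker port old-k8s-version-918475 22
	    docker port old-k8s-version-918475 8443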
	I1018 18:18:26.173874  190297 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/old-k8s-version-918475/id_rsa...
	I1018 18:18:27.081441  190297 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21409-2509/.minikube/machines/old-k8s-version-918475/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 18:18:27.100688  190297 cli_runner.go:164] Run: docker container inspect old-k8s-version-918475 --format={{.State.Status}}
	I1018 18:18:27.117665  190297 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 18:18:27.117688  190297 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-918475 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 18:18:27.158345  190297 cli_runner.go:164] Run: docker container inspect old-k8s-version-918475 --format={{.State.Status}}
	I1018 18:18:27.184349  190297 machine.go:93] provisionDockerMachine start ...
	I1018 18:18:27.184436  190297 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-918475
	I1018 18:18:27.202912  190297 main.go:141] libmachine: Using SSH client type: native
	I1018 18:18:27.203365  190297 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33043 <nil> <nil>}
	I1018 18:18:27.203379  190297 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 18:18:27.204016  190297 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52848->127.0.0.1:33043: read: connection reset by peer
	I1018 18:18:30.360525  190297 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-918475
	
	I1018 18:18:30.360551  190297 ubuntu.go:182] provisioning hostname "old-k8s-version-918475"
	I1018 18:18:30.360623  190297 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-918475
	I1018 18:18:30.379459  190297 main.go:141] libmachine: Using SSH client type: native
	I1018 18:18:30.379762  190297 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33043 <nil> <nil>}
	I1018 18:18:30.379780  190297 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-918475 && echo "old-k8s-version-918475" | sudo tee /etc/hostname
	I1018 18:18:30.542199  190297 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-918475
	
	I1018 18:18:30.542298  190297 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-918475
	I1018 18:18:30.559565  190297 main.go:141] libmachine: Using SSH client type: native
	I1018 18:18:30.559884  190297 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33043 <nil> <nil>}
	I1018 18:18:30.559908  190297 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-918475' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-918475/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-918475' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 18:18:30.711608  190297 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 18:18:30.711637  190297 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-2509/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-2509/.minikube}
	I1018 18:18:30.711663  190297 ubuntu.go:190] setting up certificates
	I1018 18:18:30.711673  190297 provision.go:84] configureAuth start
	I1018 18:18:30.711734  190297 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-918475
	I1018 18:18:30.729241  190297 provision.go:143] copyHostCerts
	I1018 18:18:30.729337  190297 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem, removing ...
	I1018 18:18:30.729353  190297 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem
	I1018 18:18:30.729434  190297 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem (1078 bytes)
	I1018 18:18:30.729551  190297 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem, removing ...
	I1018 18:18:30.729564  190297 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem
	I1018 18:18:30.729592  190297 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem (1123 bytes)
	I1018 18:18:30.729659  190297 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem, removing ...
	I1018 18:18:30.729669  190297 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem
	I1018 18:18:30.729699  190297 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem (1675 bytes)
	I1018 18:18:30.729757  190297 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-918475 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-918475]
	I1018 18:18:30.837790  190297 provision.go:177] copyRemoteCerts
	I1018 18:18:30.837881  190297 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 18:18:30.837947  190297 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-918475
	I1018 18:18:30.855016  190297 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33043 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/old-k8s-version-918475/id_rsa Username:docker}
	I1018 18:18:30.956263  190297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 18:18:30.973105  190297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1018 18:18:30.990833  190297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 18:18:31.012110  190297 provision.go:87] duration metric: took 300.412437ms to configureAuth
	I1018 18:18:31.012136  190297 ubuntu.go:206] setting minikube options for container-runtime
	I1018 18:18:31.012326  190297 config.go:182] Loaded profile config "old-k8s-version-918475": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1018 18:18:31.012439  190297 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-918475
	I1018 18:18:31.030040  190297 main.go:141] libmachine: Using SSH client type: native
	I1018 18:18:31.030392  190297 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33043 <nil> <nil>}
	I1018 18:18:31.030414  190297 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 18:18:31.295780  190297 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 18:18:31.295806  190297 machine.go:96] duration metric: took 4.111438646s to provisionDockerMachine
	I1018 18:18:31.295817  190297 client.go:171] duration metric: took 11.390385273s to LocalClient.Create
	I1018 18:18:31.295832  190297 start.go:167] duration metric: took 11.390457373s to libmachine.API.Create "old-k8s-version-918475"
	I1018 18:18:31.295839  190297 start.go:293] postStartSetup for "old-k8s-version-918475" (driver="docker")
	I1018 18:18:31.295849  190297 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 18:18:31.295917  190297 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 18:18:31.295963  190297 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-918475
	I1018 18:18:31.320764  190297 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33043 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/old-k8s-version-918475/id_rsa Username:docker}
	I1018 18:18:31.424738  190297 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 18:18:31.427882  190297 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 18:18:31.427912  190297 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 18:18:31.427923  190297 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/addons for local assets ...
	I1018 18:18:31.427983  190297 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/files for local assets ...
	I1018 18:18:31.428075  190297 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> 43202.pem in /etc/ssl/certs
	I1018 18:18:31.428181  190297 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 18:18:31.435671  190297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /etc/ssl/certs/43202.pem (1708 bytes)
	I1018 18:18:31.455402  190297 start.go:296] duration metric: took 159.548216ms for postStartSetup
	I1018 18:18:31.455756  190297 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-918475
	I1018 18:18:31.472148  190297 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/old-k8s-version-918475/config.json ...
	I1018 18:18:31.472427  190297 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 18:18:31.472476  190297 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-918475
	I1018 18:18:31.491258  190297 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33043 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/old-k8s-version-918475/id_rsa Username:docker}
	I1018 18:18:31.589926  190297 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 18:18:31.594628  190297 start.go:128] duration metric: took 11.6928315s to createHost
	I1018 18:18:31.594651  190297 start.go:83] releasing machines lock for "old-k8s-version-918475", held for 11.692959033s
	I1018 18:18:31.594719  190297 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-918475
	I1018 18:18:31.611568  190297 ssh_runner.go:195] Run: cat /version.json
	I1018 18:18:31.611641  190297 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-918475
	I1018 18:18:31.611949  190297 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 18:18:31.612034  190297 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-918475
	I1018 18:18:31.628100  190297 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33043 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/old-k8s-version-918475/id_rsa Username:docker}
	I1018 18:18:31.636690  190297 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33043 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/old-k8s-version-918475/id_rsa Username:docker}
	I1018 18:18:31.736691  190297 ssh_runner.go:195] Run: systemctl --version
	I1018 18:18:31.829382  190297 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 18:18:31.866760  190297 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 18:18:31.871324  190297 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 18:18:31.871396  190297 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 18:18:31.902831  190297 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1018 18:18:31.902902  190297 start.go:495] detecting cgroup driver to use...
	I1018 18:18:31.902947  190297 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 18:18:31.903026  190297 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 18:18:31.920245  190297 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 18:18:31.932702  190297 docker.go:218] disabling cri-docker service (if available) ...
	I1018 18:18:31.932811  190297 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 18:18:31.951033  190297 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 18:18:31.970942  190297 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 18:18:32.097052  190297 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 18:18:32.234351  190297 docker.go:234] disabling docker service ...
	I1018 18:18:32.234435  190297 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 18:18:32.260081  190297 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 18:18:32.275026  190297 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 18:18:32.406571  190297 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 18:18:32.530075  190297 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 18:18:32.544070  190297 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 18:18:32.559326  190297 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1018 18:18:32.559401  190297 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:18:32.568539  190297 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 18:18:32.568604  190297 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:18:32.578253  190297 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:18:32.589299  190297 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:18:32.598736  190297 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 18:18:32.606631  190297 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:18:32.615621  190297 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:18:32.629687  190297 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:18:32.638597  190297 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 18:18:32.646746  190297 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 18:18:32.654943  190297 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 18:18:32.784304  190297 ssh_runner.go:195] Run: sudo systemctl restart crio
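	The sed edits above point cri-o at registry.k8s.io/pause:3.9, switch cgroup_manager to cgroupfs with a pod-scoped conmon_cgroup, and open unprivileged low ports via default_sysctls before the daemon-reload and crio restart. Illustrative verification commands (run inside the node, e.g. via minikube ssh; not part of the test run):
	    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version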
	I1018 18:18:32.914959  190297 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 18:18:32.915096  190297 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 18:18:32.918953  190297 start.go:563] Will wait 60s for crictl version
	I1018 18:18:32.919067  190297 ssh_runner.go:195] Run: which crictl
	I1018 18:18:32.922607  190297 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 18:18:32.950032  190297 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 18:18:32.950194  190297 ssh_runner.go:195] Run: crio --version
	I1018 18:18:32.985300  190297 ssh_runner.go:195] Run: crio --version
	I1018 18:18:33.027035  190297 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1018 18:18:33.029903  190297 cli_runner.go:164] Run: docker network inspect old-k8s-version-918475 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 18:18:33.051887  190297 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1018 18:18:33.056537  190297 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 18:18:33.067815  190297 kubeadm.go:883] updating cluster {Name:old-k8s-version-918475 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-918475 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 18:18:33.067928  190297 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1018 18:18:33.067985  190297 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 18:18:33.108161  190297 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 18:18:33.108181  190297 crio.go:433] Images already preloaded, skipping extraction
	I1018 18:18:33.108236  190297 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 18:18:33.133485  190297 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 18:18:33.133561  190297 cache_images.go:85] Images are preloaded, skipping loading
	I1018 18:18:33.133584  190297 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.28.0 crio true true} ...
	I1018 18:18:33.133706  190297 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-918475 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-918475 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
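	The [Unit]/[Service] fragment above is the kubelet drop-in minikube renders for this node; a few lines further down it is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes) and the kubelet is started. Once that has happened, the merged unit could be inspected on the node (illustrative only):
	    systemctl cat kubelet
	    systemctl show kubelet --property=ExecStart --no-pager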
	I1018 18:18:33.133821  190297 ssh_runner.go:195] Run: crio config
	I1018 18:18:33.186864  190297 cni.go:84] Creating CNI manager for ""
	I1018 18:18:33.186895  190297 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 18:18:33.186912  190297 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 18:18:33.186934  190297 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-918475 NodeName:old-k8s-version-918475 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 18:18:33.187100  190297 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-918475"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
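	The YAML above is the complete kubeadm configuration minikube generates (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration); below it is written to /var/tmp/minikube/kubeadm.yaml (2160 bytes) and fed to kubeadm init. A hedged sanity check against the same pinned binary, not something the test itself performs:
	    sudo /var/lib/minikube/binaries/v1.28.0/kubeadm config images list --config /var/tmp/minikube/kubeadm.yaml
	    sudo wc -c /var/tmp/minikube/kubeadm.yaml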
	
	I1018 18:18:33.187180  190297 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1018 18:18:33.194890  190297 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 18:18:33.194987  190297 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 18:18:33.202394  190297 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1018 18:18:33.214707  190297 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 18:18:33.227796  190297 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1018 18:18:33.240251  190297 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1018 18:18:33.243945  190297 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 18:18:33.254148  190297 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 18:18:33.382545  190297 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 18:18:33.398849  190297 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/old-k8s-version-918475 for IP: 192.168.76.2
	I1018 18:18:33.398872  190297 certs.go:195] generating shared ca certs ...
	I1018 18:18:33.398889  190297 certs.go:227] acquiring lock for ca certs: {Name:mk544ed642fa2832e9f6dd22fa45f3270b7c1ad4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:18:33.399023  190297 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key
	I1018 18:18:33.399073  190297 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key
	I1018 18:18:33.399084  190297 certs.go:257] generating profile certs ...
	I1018 18:18:33.399140  190297 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/old-k8s-version-918475/client.key
	I1018 18:18:33.399166  190297 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/old-k8s-version-918475/client.crt with IP's: []
	I1018 18:18:33.754208  190297 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/old-k8s-version-918475/client.crt ...
	I1018 18:18:33.754238  190297 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/old-k8s-version-918475/client.crt: {Name:mk86784e0ee415564b7e8aa5b9fa47c8bd0c8e84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:18:33.754439  190297 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/old-k8s-version-918475/client.key ...
	I1018 18:18:33.754455  190297 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/old-k8s-version-918475/client.key: {Name:mkc3c0d24ad2658586e9c2406eb626c731052cb5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:18:33.754545  190297 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/old-k8s-version-918475/apiserver.key.630d08a5
	I1018 18:18:33.754563  190297 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/old-k8s-version-918475/apiserver.crt.630d08a5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1018 18:18:35.220251  190297 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/old-k8s-version-918475/apiserver.crt.630d08a5 ...
	I1018 18:18:35.220282  190297 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/old-k8s-version-918475/apiserver.crt.630d08a5: {Name:mk1f2f125c5e13604be432f776814f8f3aecdd6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:18:35.220473  190297 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/old-k8s-version-918475/apiserver.key.630d08a5 ...
	I1018 18:18:35.220488  190297 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/old-k8s-version-918475/apiserver.key.630d08a5: {Name:mk3ac4e5baaa9ba6d1fe53e38151e1c7a8d78318 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:18:35.220574  190297 certs.go:382] copying /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/old-k8s-version-918475/apiserver.crt.630d08a5 -> /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/old-k8s-version-918475/apiserver.crt
	I1018 18:18:35.220674  190297 certs.go:386] copying /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/old-k8s-version-918475/apiserver.key.630d08a5 -> /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/old-k8s-version-918475/apiserver.key
	I1018 18:18:35.220742  190297 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/old-k8s-version-918475/proxy-client.key
	I1018 18:18:35.220759  190297 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/old-k8s-version-918475/proxy-client.crt with IP's: []
	I1018 18:18:35.625295  190297 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/old-k8s-version-918475/proxy-client.crt ...
	I1018 18:18:35.625323  190297 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/old-k8s-version-918475/proxy-client.crt: {Name:mkbb3d08f62b8312de90af5beb86dbdd62665ae4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:18:35.625513  190297 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/old-k8s-version-918475/proxy-client.key ...
	I1018 18:18:35.625527  190297 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/old-k8s-version-918475/proxy-client.key: {Name:mk01e8ff0ed6a7700af6d4dab2ed51964a07627b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:18:35.625723  190297 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem (1338 bytes)
	W1018 18:18:35.625771  190297 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320_empty.pem, impossibly tiny 0 bytes
	I1018 18:18:35.625783  190297 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 18:18:35.625811  190297 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem (1078 bytes)
	I1018 18:18:35.625838  190297 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem (1123 bytes)
	I1018 18:18:35.625862  190297 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem (1675 bytes)
	I1018 18:18:35.625909  190297 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem (1708 bytes)
	I1018 18:18:35.626604  190297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 18:18:35.650015  190297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 18:18:35.669325  190297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 18:18:35.689201  190297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 18:18:35.707228  190297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/old-k8s-version-918475/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1018 18:18:35.726236  190297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/old-k8s-version-918475/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 18:18:35.753391  190297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/old-k8s-version-918475/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 18:18:35.774307  190297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/old-k8s-version-918475/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 18:18:35.796168  190297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /usr/share/ca-certificates/43202.pem (1708 bytes)
	I1018 18:18:35.817377  190297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 18:18:35.839998  190297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem --> /usr/share/ca-certificates/4320.pem (1338 bytes)
	I1018 18:18:35.860425  190297 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 18:18:35.873652  190297 ssh_runner.go:195] Run: openssl version
	I1018 18:18:35.879955  190297 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 18:18:35.888843  190297 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 18:18:35.892630  190297 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I1018 18:18:35.892710  190297 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 18:18:35.934983  190297 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 18:18:35.943578  190297 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4320.pem && ln -fs /usr/share/ca-certificates/4320.pem /etc/ssl/certs/4320.pem"
	I1018 18:18:35.952118  190297 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4320.pem
	I1018 18:18:35.955950  190297 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 17:19 /usr/share/ca-certificates/4320.pem
	I1018 18:18:35.956031  190297 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4320.pem
	I1018 18:18:35.997215  190297 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4320.pem /etc/ssl/certs/51391683.0"
	I1018 18:18:36.007722  190297 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43202.pem && ln -fs /usr/share/ca-certificates/43202.pem /etc/ssl/certs/43202.pem"
	I1018 18:18:36.017167  190297 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43202.pem
	I1018 18:18:36.021646  190297 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 17:19 /usr/share/ca-certificates/43202.pem
	I1018 18:18:36.021728  190297 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43202.pem
	I1018 18:18:36.065102  190297 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43202.pem /etc/ssl/certs/3ec20f2e.0"
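	The openssl hashing and ln steps above install minikubeCA.pem, 4320.pem and 43202.pem under /usr/share/ca-certificates and create their subject-hash symlinks (b5213941.0, 51391683.0, 3ec20f2e.0) in /etc/ssl/certs. As an illustrative follow-up (apiserver.crt was generated from minikubeCA and copied to /var/lib/minikube/certs a few lines earlier), verification against the hashed directory should report OK:
	    openssl x509 -noout -subject -issuer -in /usr/share/ca-certificates/minikubeCA.pem
	    sudo openssl verify -CApath /etc/ssl/certs /var/lib/minikube/certs/apiserver.crt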
	I1018 18:18:36.074606  190297 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 18:18:36.078518  190297 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 18:18:36.078581  190297 kubeadm.go:400] StartCluster: {Name:old-k8s-version-918475 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-918475 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwar
ePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 18:18:36.078661  190297 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 18:18:36.078730  190297 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 18:18:36.106926  190297 cri.go:89] found id: ""
	I1018 18:18:36.107054  190297 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 18:18:36.115231  190297 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 18:18:36.123313  190297 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 18:18:36.123390  190297 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 18:18:36.131827  190297 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 18:18:36.131847  190297 kubeadm.go:157] found existing configuration files:
	
	I1018 18:18:36.131907  190297 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 18:18:36.139859  190297 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 18:18:36.139931  190297 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 18:18:36.148574  190297 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 18:18:36.156889  190297 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 18:18:36.157025  190297 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 18:18:36.164582  190297 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 18:18:36.172840  190297 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 18:18:36.172975  190297 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 18:18:36.180540  190297 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 18:18:36.188420  190297 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 18:18:36.188534  190297 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1018 18:18:36.196120  190297 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 18:18:36.290097  190297 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1018 18:18:36.392634  190297 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1018 18:18:54.407447  190297 kubeadm.go:318] [init] Using Kubernetes version: v1.28.0
	I1018 18:18:54.407512  190297 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 18:18:54.407599  190297 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 18:18:54.407655  190297 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1018 18:18:54.407689  190297 kubeadm.go:318] OS: Linux
	I1018 18:18:54.407733  190297 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 18:18:54.407781  190297 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1018 18:18:54.407828  190297 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 18:18:54.407876  190297 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 18:18:54.407923  190297 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 18:18:54.407974  190297 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 18:18:54.408019  190297 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 18:18:54.408067  190297 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 18:18:54.408112  190297 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1018 18:18:54.408184  190297 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 18:18:54.408287  190297 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 18:18:54.408379  190297 kubeadm.go:318] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1018 18:18:54.408441  190297 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 18:18:54.411460  190297 out.go:252]   - Generating certificates and keys ...
	I1018 18:18:54.411623  190297 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 18:18:54.411734  190297 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 18:18:54.411843  190297 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 18:18:54.411925  190297 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 18:18:54.412017  190297 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 18:18:54.412115  190297 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 18:18:54.412210  190297 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 18:18:54.412413  190297 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-918475] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1018 18:18:54.412473  190297 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 18:18:54.412605  190297 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-918475] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1018 18:18:54.412697  190297 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 18:18:54.412764  190297 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 18:18:54.412811  190297 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 18:18:54.412870  190297 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 18:18:54.412924  190297 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 18:18:54.413009  190297 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 18:18:54.413082  190297 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 18:18:54.413139  190297 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 18:18:54.413225  190297 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 18:18:54.413295  190297 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 18:18:54.418212  190297 out.go:252]   - Booting up control plane ...
	I1018 18:18:54.418340  190297 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 18:18:54.418421  190297 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 18:18:54.418490  190297 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 18:18:54.418598  190297 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 18:18:54.418686  190297 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 18:18:54.418727  190297 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 18:18:54.418892  190297 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1018 18:18:54.418973  190297 kubeadm.go:318] [apiclient] All control plane components are healthy after 8.503291 seconds
	I1018 18:18:54.419084  190297 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 18:18:54.419218  190297 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 18:18:54.419279  190297 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 18:18:54.419491  190297 kubeadm.go:318] [mark-control-plane] Marking the node old-k8s-version-918475 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 18:18:54.419550  190297 kubeadm.go:318] [bootstrap-token] Using token: nunffk.2scakycsxhkryxoy
	I1018 18:18:54.421771  190297 out.go:252]   - Configuring RBAC rules ...
	I1018 18:18:54.421955  190297 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 18:18:54.422080  190297 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 18:18:54.422266  190297 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 18:18:54.422442  190297 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 18:18:54.422603  190297 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 18:18:54.422749  190297 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 18:18:54.422914  190297 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 18:18:54.422984  190297 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 18:18:54.423061  190297 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 18:18:54.423086  190297 kubeadm.go:318] 
	I1018 18:18:54.423179  190297 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 18:18:54.423202  190297 kubeadm.go:318] 
	I1018 18:18:54.423326  190297 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 18:18:54.423354  190297 kubeadm.go:318] 
	I1018 18:18:54.423397  190297 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 18:18:54.423489  190297 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 18:18:54.423595  190297 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 18:18:54.423617  190297 kubeadm.go:318] 
	I1018 18:18:54.423711  190297 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 18:18:54.423737  190297 kubeadm.go:318] 
	I1018 18:18:54.423822  190297 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 18:18:54.423843  190297 kubeadm.go:318] 
	I1018 18:18:54.423924  190297 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 18:18:54.424033  190297 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 18:18:54.424132  190297 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 18:18:54.424155  190297 kubeadm.go:318] 
	I1018 18:18:54.424270  190297 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 18:18:54.424378  190297 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 18:18:54.424400  190297 kubeadm.go:318] 
	I1018 18:18:54.424513  190297 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token nunffk.2scakycsxhkryxoy \
	I1018 18:18:54.424667  190297 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:d0244c5bf86cdf97546c6a22045cb6ed9d7ead524d9c98d9ca35da77d5d7a04d \
	I1018 18:18:54.424710  190297 kubeadm.go:318] 	--control-plane 
	I1018 18:18:54.424731  190297 kubeadm.go:318] 
	I1018 18:18:54.424846  190297 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 18:18:54.424868  190297 kubeadm.go:318] 
	I1018 18:18:54.425020  190297 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token nunffk.2scakycsxhkryxoy \
	I1018 18:18:54.425175  190297 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:d0244c5bf86cdf97546c6a22045cb6ed9d7ead524d9c98d9ca35da77d5d7a04d 
	I1018 18:18:54.425201  190297 cni.go:84] Creating CNI manager for ""
	I1018 18:18:54.425219  190297 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 18:18:54.430176  190297 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1018 18:18:54.433018  190297 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 18:18:54.448343  190297 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1018 18:18:54.448365  190297 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 18:18:54.467532  190297 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1018 18:18:55.437574  190297 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 18:18:55.437711  190297 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-918475 minikube.k8s.io/updated_at=2025_10_18T18_18_55_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404 minikube.k8s.io/name=old-k8s-version-918475 minikube.k8s.io/primary=true
	I1018 18:18:55.437747  190297 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:18:55.453433  190297 ops.go:34] apiserver oom_adj: -16
	I1018 18:18:55.571692  190297 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:18:56.072004  190297 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:18:56.572621  190297 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:18:57.071737  190297 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:18:57.571789  190297 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:18:58.071978  190297 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:18:58.571821  190297 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:18:59.071841  190297 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:18:59.571856  190297 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:19:00.088915  190297 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:19:00.572504  190297 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:19:01.072535  190297 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:19:01.572010  190297 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:19:02.072084  190297 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:19:02.571925  190297 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:19:03.072348  190297 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:19:03.572780  190297 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:19:04.072211  190297 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:19:04.571848  190297 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:19:05.072536  190297 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:19:05.572144  190297 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:19:06.072677  190297 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:19:06.571778  190297 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:19:06.680045  190297 kubeadm.go:1113] duration metric: took 11.242391384s to wait for elevateKubeSystemPrivileges
	I1018 18:19:06.680079  190297 kubeadm.go:402] duration metric: took 30.601505005s to StartCluster
	I1018 18:19:06.680097  190297 settings.go:142] acquiring lock: {Name:mk3a3fd093bc95e20cc1842611fedcbe4a79e692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:19:06.680165  190297 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-2509/kubeconfig
	I1018 18:19:06.681256  190297 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/kubeconfig: {Name:mk229d7d0b6575771899e0ce8a346bbddf5cf86b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:19:06.681516  190297 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 18:19:06.681514  190297 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 18:19:06.681780  190297 config.go:182] Loaded profile config "old-k8s-version-918475": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1018 18:19:06.681814  190297 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 18:19:06.681874  190297 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-918475"
	I1018 18:19:06.681895  190297 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-918475"
	I1018 18:19:06.681916  190297 host.go:66] Checking if "old-k8s-version-918475" exists ...
	I1018 18:19:06.682367  190297 cli_runner.go:164] Run: docker container inspect old-k8s-version-918475 --format={{.State.Status}}
	I1018 18:19:06.682958  190297 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-918475"
	I1018 18:19:06.682982  190297 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-918475"
	I1018 18:19:06.683289  190297 cli_runner.go:164] Run: docker container inspect old-k8s-version-918475 --format={{.State.Status}}
	I1018 18:19:06.685706  190297 out.go:179] * Verifying Kubernetes components...
	I1018 18:19:06.690534  190297 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 18:19:06.722842  190297 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 18:19:06.725649  190297 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 18:19:06.725670  190297 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 18:19:06.725737  190297 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-918475
	I1018 18:19:06.727234  190297 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-918475"
	I1018 18:19:06.727265  190297 host.go:66] Checking if "old-k8s-version-918475" exists ...
	I1018 18:19:06.727678  190297 cli_runner.go:164] Run: docker container inspect old-k8s-version-918475 --format={{.State.Status}}
	I1018 18:19:06.760186  190297 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33043 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/old-k8s-version-918475/id_rsa Username:docker}
	I1018 18:19:06.761451  190297 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 18:19:06.761467  190297 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 18:19:06.761520  190297 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-918475
	I1018 18:19:06.798764  190297 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33043 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/old-k8s-version-918475/id_rsa Username:docker}
	I1018 18:19:06.972167  190297 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1018 18:19:07.013648  190297 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 18:19:07.020967  190297 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 18:19:07.077256  190297 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 18:19:07.917053  190297 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-918475" to be "Ready" ...
	I1018 18:19:07.917899  190297 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1018 18:19:08.310117  190297 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.232819283s)
	I1018 18:19:08.310761  190297 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.289719108s)
	I1018 18:19:08.324212  190297 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1018 18:19:08.327162  190297 addons.go:514] duration metric: took 1.645325451s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1018 18:19:08.430914  190297 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-918475" context rescaled to 1 replicas
	W1018 18:19:09.920424  190297 node_ready.go:57] node "old-k8s-version-918475" has "Ready":"False" status (will retry)
	W1018 18:19:11.920902  190297 node_ready.go:57] node "old-k8s-version-918475" has "Ready":"False" status (will retry)
	W1018 18:19:14.424727  190297 node_ready.go:57] node "old-k8s-version-918475" has "Ready":"False" status (will retry)
	W1018 18:19:16.425961  190297 node_ready.go:57] node "old-k8s-version-918475" has "Ready":"False" status (will retry)
	W1018 18:19:18.919775  190297 node_ready.go:57] node "old-k8s-version-918475" has "Ready":"False" status (will retry)
	W1018 18:19:20.920358  190297 node_ready.go:57] node "old-k8s-version-918475" has "Ready":"False" status (will retry)
	I1018 18:19:21.433918  190297 node_ready.go:49] node "old-k8s-version-918475" is "Ready"
	I1018 18:19:21.433944  190297 node_ready.go:38] duration metric: took 13.516857146s for node "old-k8s-version-918475" to be "Ready" ...
	I1018 18:19:21.433957  190297 api_server.go:52] waiting for apiserver process to appear ...
	I1018 18:19:21.434032  190297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 18:19:21.453828  190297 api_server.go:72] duration metric: took 14.772285159s to wait for apiserver process to appear ...
	I1018 18:19:21.453849  190297 api_server.go:88] waiting for apiserver healthz status ...
	I1018 18:19:21.453870  190297 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 18:19:21.462720  190297 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1018 18:19:21.464269  190297 api_server.go:141] control plane version: v1.28.0
	I1018 18:19:21.464296  190297 api_server.go:131] duration metric: took 10.439665ms to wait for apiserver health ...
	I1018 18:19:21.464305  190297 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 18:19:21.472053  190297 system_pods.go:59] 8 kube-system pods found
	I1018 18:19:21.472091  190297 system_pods.go:61] "coredns-5dd5756b68-kd9bz" [db934def-c206-49f5-93c1-5e9e72029aea] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 18:19:21.472099  190297 system_pods.go:61] "etcd-old-k8s-version-918475" [52e60769-ce25-4039-9816-8eee5939547b] Running
	I1018 18:19:21.472105  190297 system_pods.go:61] "kindnet-l8wgz" [1ce8f8fe-9578-4405-b71b-8dbb34c91ff8] Running
	I1018 18:19:21.472110  190297 system_pods.go:61] "kube-apiserver-old-k8s-version-918475" [bb13f0ff-7082-4594-b7a9-082fae97e8b1] Running
	I1018 18:19:21.472115  190297 system_pods.go:61] "kube-controller-manager-old-k8s-version-918475" [11c22b96-b426-4049-b453-30869431916f] Running
	I1018 18:19:21.472119  190297 system_pods.go:61] "kube-proxy-776dm" [8dc0388f-47c7-46e9-9f05-4815ce812559] Running
	I1018 18:19:21.472124  190297 system_pods.go:61] "kube-scheduler-old-k8s-version-918475" [b2f9fdec-0d90-4575-a638-f9ed0457ae29] Running
	I1018 18:19:21.472130  190297 system_pods.go:61] "storage-provisioner" [486aafde-9949-4760-8b48-d58682b50726] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 18:19:21.472136  190297 system_pods.go:74] duration metric: took 7.825659ms to wait for pod list to return data ...
	I1018 18:19:21.472148  190297 default_sa.go:34] waiting for default service account to be created ...
	I1018 18:19:21.476045  190297 default_sa.go:45] found service account: "default"
	I1018 18:19:21.476069  190297 default_sa.go:55] duration metric: took 3.915439ms for default service account to be created ...
	I1018 18:19:21.476079  190297 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 18:19:21.480268  190297 system_pods.go:86] 8 kube-system pods found
	I1018 18:19:21.480299  190297 system_pods.go:89] "coredns-5dd5756b68-kd9bz" [db934def-c206-49f5-93c1-5e9e72029aea] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 18:19:21.480306  190297 system_pods.go:89] "etcd-old-k8s-version-918475" [52e60769-ce25-4039-9816-8eee5939547b] Running
	I1018 18:19:21.480312  190297 system_pods.go:89] "kindnet-l8wgz" [1ce8f8fe-9578-4405-b71b-8dbb34c91ff8] Running
	I1018 18:19:21.480316  190297 system_pods.go:89] "kube-apiserver-old-k8s-version-918475" [bb13f0ff-7082-4594-b7a9-082fae97e8b1] Running
	I1018 18:19:21.480321  190297 system_pods.go:89] "kube-controller-manager-old-k8s-version-918475" [11c22b96-b426-4049-b453-30869431916f] Running
	I1018 18:19:21.480325  190297 system_pods.go:89] "kube-proxy-776dm" [8dc0388f-47c7-46e9-9f05-4815ce812559] Running
	I1018 18:19:21.480329  190297 system_pods.go:89] "kube-scheduler-old-k8s-version-918475" [b2f9fdec-0d90-4575-a638-f9ed0457ae29] Running
	I1018 18:19:21.480335  190297 system_pods.go:89] "storage-provisioner" [486aafde-9949-4760-8b48-d58682b50726] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 18:19:21.480356  190297 retry.go:31] will retry after 306.218177ms: missing components: kube-dns
	I1018 18:19:21.796525  190297 system_pods.go:86] 8 kube-system pods found
	I1018 18:19:21.796560  190297 system_pods.go:89] "coredns-5dd5756b68-kd9bz" [db934def-c206-49f5-93c1-5e9e72029aea] Running
	I1018 18:19:21.796574  190297 system_pods.go:89] "etcd-old-k8s-version-918475" [52e60769-ce25-4039-9816-8eee5939547b] Running
	I1018 18:19:21.796581  190297 system_pods.go:89] "kindnet-l8wgz" [1ce8f8fe-9578-4405-b71b-8dbb34c91ff8] Running
	I1018 18:19:21.796590  190297 system_pods.go:89] "kube-apiserver-old-k8s-version-918475" [bb13f0ff-7082-4594-b7a9-082fae97e8b1] Running
	I1018 18:19:21.796595  190297 system_pods.go:89] "kube-controller-manager-old-k8s-version-918475" [11c22b96-b426-4049-b453-30869431916f] Running
	I1018 18:19:21.796599  190297 system_pods.go:89] "kube-proxy-776dm" [8dc0388f-47c7-46e9-9f05-4815ce812559] Running
	I1018 18:19:21.796603  190297 system_pods.go:89] "kube-scheduler-old-k8s-version-918475" [b2f9fdec-0d90-4575-a638-f9ed0457ae29] Running
	I1018 18:19:21.796608  190297 system_pods.go:89] "storage-provisioner" [486aafde-9949-4760-8b48-d58682b50726] Running
	I1018 18:19:21.796615  190297 system_pods.go:126] duration metric: took 320.530961ms to wait for k8s-apps to be running ...
	I1018 18:19:21.796625  190297 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 18:19:21.796697  190297 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 18:19:21.813005  190297 system_svc.go:56] duration metric: took 16.37172ms WaitForService to wait for kubelet
	I1018 18:19:21.813030  190297 kubeadm.go:586] duration metric: took 15.131492781s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 18:19:21.813049  190297 node_conditions.go:102] verifying NodePressure condition ...
	I1018 18:19:21.817639  190297 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 18:19:21.817671  190297 node_conditions.go:123] node cpu capacity is 2
	I1018 18:19:21.817686  190297 node_conditions.go:105] duration metric: took 4.630418ms to run NodePressure ...
	I1018 18:19:21.817698  190297 start.go:241] waiting for startup goroutines ...
	I1018 18:19:21.817717  190297 start.go:246] waiting for cluster config update ...
	I1018 18:19:21.817729  190297 start.go:255] writing updated cluster config ...
	I1018 18:19:21.818049  190297 ssh_runner.go:195] Run: rm -f paused
	I1018 18:19:21.822255  190297 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 18:19:21.826577  190297 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-kd9bz" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:19:21.832082  190297 pod_ready.go:94] pod "coredns-5dd5756b68-kd9bz" is "Ready"
	I1018 18:19:21.832104  190297 pod_ready.go:86] duration metric: took 5.504571ms for pod "coredns-5dd5756b68-kd9bz" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:19:21.835074  190297 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-918475" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:19:21.840290  190297 pod_ready.go:94] pod "etcd-old-k8s-version-918475" is "Ready"
	I1018 18:19:21.840323  190297 pod_ready.go:86] duration metric: took 5.228686ms for pod "etcd-old-k8s-version-918475" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:19:21.843928  190297 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-918475" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:19:21.849960  190297 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-918475" is "Ready"
	I1018 18:19:21.849995  190297 pod_ready.go:86] duration metric: took 6.039642ms for pod "kube-apiserver-old-k8s-version-918475" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:19:21.853030  190297 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-918475" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:19:22.226523  190297 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-918475" is "Ready"
	I1018 18:19:22.226550  190297 pod_ready.go:86] duration metric: took 373.479631ms for pod "kube-controller-manager-old-k8s-version-918475" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:19:22.428309  190297 pod_ready.go:83] waiting for pod "kube-proxy-776dm" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:19:22.826600  190297 pod_ready.go:94] pod "kube-proxy-776dm" is "Ready"
	I1018 18:19:22.826628  190297 pod_ready.go:86] duration metric: took 398.293485ms for pod "kube-proxy-776dm" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:19:23.027285  190297 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-918475" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:19:23.426947  190297 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-918475" is "Ready"
	I1018 18:19:23.426975  190297 pod_ready.go:86] duration metric: took 399.665819ms for pod "kube-scheduler-old-k8s-version-918475" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:19:23.426988  190297 pod_ready.go:40] duration metric: took 1.604703885s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 18:19:23.482642  190297 start.go:624] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1018 18:19:23.485933  190297 out.go:203] 
	W1018 18:19:23.488795  190297 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1018 18:19:23.492027  190297 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1018 18:19:23.495902  190297 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-918475" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 18 18:19:21 old-k8s-version-918475 crio[841]: time="2025-10-18T18:19:21.435364001Z" level=info msg="Created container f129fc1fc0050324f3427e89bd1691b6cee7e1a655069c8c01a1fbab17c220ab: kube-system/coredns-5dd5756b68-kd9bz/coredns" id=a6dee798-cc4f-4772-98af-26e72d9f65e3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 18:19:21 old-k8s-version-918475 crio[841]: time="2025-10-18T18:19:21.436218659Z" level=info msg="Starting container: f129fc1fc0050324f3427e89bd1691b6cee7e1a655069c8c01a1fbab17c220ab" id=f6314509-cdbb-4d61-acd2-66a043b74b10 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 18:19:21 old-k8s-version-918475 crio[841]: time="2025-10-18T18:19:21.451761691Z" level=info msg="Started container" PID=1943 containerID=f129fc1fc0050324f3427e89bd1691b6cee7e1a655069c8c01a1fbab17c220ab description=kube-system/coredns-5dd5756b68-kd9bz/coredns id=f6314509-cdbb-4d61-acd2-66a043b74b10 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d4bd55de9e87a6c69f8c7f45bc5d0c0d7c32633c11c8d1c62d1d9035d49676e0
	Oct 18 18:19:24 old-k8s-version-918475 crio[841]: time="2025-10-18T18:19:24.027196013Z" level=info msg="Running pod sandbox: default/busybox/POD" id=eb1a34d8-4e5a-467d-bf02-e0875d8731fc name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 18:19:24 old-k8s-version-918475 crio[841]: time="2025-10-18T18:19:24.027267514Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 18:19:24 old-k8s-version-918475 crio[841]: time="2025-10-18T18:19:24.041094233Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:1a88fa1e2d119b75792d5b03cadfc56f50626795602ddcfb68678ebf0a4b90e1 UID:d5268bf2-03ea-4390-b3f8-efc451427c93 NetNS:/var/run/netns/3e7b0bdc-5e75-43de-9c17-a125ad96c4e6 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001594448}] Aliases:map[]}"
	Oct 18 18:19:24 old-k8s-version-918475 crio[841]: time="2025-10-18T18:19:24.04114783Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 18 18:19:24 old-k8s-version-918475 crio[841]: time="2025-10-18T18:19:24.050726586Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:1a88fa1e2d119b75792d5b03cadfc56f50626795602ddcfb68678ebf0a4b90e1 UID:d5268bf2-03ea-4390-b3f8-efc451427c93 NetNS:/var/run/netns/3e7b0bdc-5e75-43de-9c17-a125ad96c4e6 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001594448}] Aliases:map[]}"
	Oct 18 18:19:24 old-k8s-version-918475 crio[841]: time="2025-10-18T18:19:24.050880787Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 18 18:19:24 old-k8s-version-918475 crio[841]: time="2025-10-18T18:19:24.055673832Z" level=info msg="Ran pod sandbox 1a88fa1e2d119b75792d5b03cadfc56f50626795602ddcfb68678ebf0a4b90e1 with infra container: default/busybox/POD" id=eb1a34d8-4e5a-467d-bf02-e0875d8731fc name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 18:19:24 old-k8s-version-918475 crio[841]: time="2025-10-18T18:19:24.058320208Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=2b4c35d5-7c6a-41f8-ba8a-1363ca7d0e74 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 18:19:24 old-k8s-version-918475 crio[841]: time="2025-10-18T18:19:24.058453806Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=2b4c35d5-7c6a-41f8-ba8a-1363ca7d0e74 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 18:19:24 old-k8s-version-918475 crio[841]: time="2025-10-18T18:19:24.058500773Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=2b4c35d5-7c6a-41f8-ba8a-1363ca7d0e74 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 18:19:24 old-k8s-version-918475 crio[841]: time="2025-10-18T18:19:24.059437531Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=2530b8dc-7c69-4ff7-b0e9-5fe0bef3930d name=/runtime.v1.ImageService/PullImage
	Oct 18 18:19:24 old-k8s-version-918475 crio[841]: time="2025-10-18T18:19:24.062512344Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 18 18:19:26 old-k8s-version-918475 crio[841]: time="2025-10-18T18:19:26.024610177Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=2530b8dc-7c69-4ff7-b0e9-5fe0bef3930d name=/runtime.v1.ImageService/PullImage
	Oct 18 18:19:26 old-k8s-version-918475 crio[841]: time="2025-10-18T18:19:26.027555699Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a42798f5-81d5-4334-b151-7a5ebd48b491 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 18:19:26 old-k8s-version-918475 crio[841]: time="2025-10-18T18:19:26.03015236Z" level=info msg="Creating container: default/busybox/busybox" id=032a8248-fa73-473c-b7a7-e01658182cc8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 18:19:26 old-k8s-version-918475 crio[841]: time="2025-10-18T18:19:26.030934417Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 18:19:26 old-k8s-version-918475 crio[841]: time="2025-10-18T18:19:26.035486894Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 18:19:26 old-k8s-version-918475 crio[841]: time="2025-10-18T18:19:26.03593302Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 18:19:26 old-k8s-version-918475 crio[841]: time="2025-10-18T18:19:26.059649572Z" level=info msg="Created container 56d6945073db2d2ac14da297f203222eaf315334068af802b21f94e1f5560296: default/busybox/busybox" id=032a8248-fa73-473c-b7a7-e01658182cc8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 18:19:26 old-k8s-version-918475 crio[841]: time="2025-10-18T18:19:26.063259981Z" level=info msg="Starting container: 56d6945073db2d2ac14da297f203222eaf315334068af802b21f94e1f5560296" id=767fb432-1cde-45c0-afe2-02cad3de2287 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 18:19:26 old-k8s-version-918475 crio[841]: time="2025-10-18T18:19:26.066922625Z" level=info msg="Started container" PID=2000 containerID=56d6945073db2d2ac14da297f203222eaf315334068af802b21f94e1f5560296 description=default/busybox/busybox id=767fb432-1cde-45c0-afe2-02cad3de2287 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1a88fa1e2d119b75792d5b03cadfc56f50626795602ddcfb68678ebf0a4b90e1
	Oct 18 18:19:33 old-k8s-version-918475 crio[841]: time="2025-10-18T18:19:33.90053729Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	56d6945073db2       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   9 seconds ago       Running             busybox                   0                   1a88fa1e2d119       busybox                                          default
	f129fc1fc0050       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      13 seconds ago      Running             coredns                   0                   d4bd55de9e87a       coredns-5dd5756b68-kd9bz                         kube-system
	cdacd624e3b27       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      13 seconds ago      Running             storage-provisioner       0                   850660f0f702a       storage-provisioner                              kube-system
	5a1f17acad7b8       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    25 seconds ago      Running             kindnet-cni               0                   93ebc65d910eb       kindnet-l8wgz                                    kube-system
	de372e3908f40       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                      27 seconds ago      Running             kube-proxy                0                   ec7485968f0d0       kube-proxy-776dm                                 kube-system
	e343fbbdfcd68       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                      48 seconds ago      Running             kube-scheduler            0                   cdecc4d663f47       kube-scheduler-old-k8s-version-918475            kube-system
	cf43027dd90d5       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      48 seconds ago      Running             etcd                      0                   831679d99f5de       etcd-old-k8s-version-918475                      kube-system
	77c8affe0fb5a       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                      48 seconds ago      Running             kube-apiserver            0                   ad0836b819e50       kube-apiserver-old-k8s-version-918475            kube-system
	57ccd19fc28fd       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                      48 seconds ago      Running             kube-controller-manager   0                   138764cbde766       kube-controller-manager-old-k8s-version-918475   kube-system
	
	
	==> coredns [f129fc1fc0050324f3427e89bd1691b6cee7e1a655069c8c01a1fbab17c220ab] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:55796 - 55828 "HINFO IN 3949113543149431360.553088104849597794. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.01550094s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-918475
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-918475
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=old-k8s-version-918475
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T18_18_55_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 18:18:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-918475
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 18:19:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 18:19:25 +0000   Sat, 18 Oct 2025 18:18:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 18:19:25 +0000   Sat, 18 Oct 2025 18:18:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 18:19:25 +0000   Sat, 18 Oct 2025 18:18:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 18:19:25 +0000   Sat, 18 Oct 2025 18:19:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-918475
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                f524e506-54d4-439d-bba8-8edfc5d97a5b
	  Boot ID:                    8a1d6305-2994-4aa4-a4cc-6b62966b9918
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-5dd5756b68-kd9bz                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     28s
	  kube-system                 etcd-old-k8s-version-918475                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         41s
	  kube-system                 kindnet-l8wgz                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-old-k8s-version-918475             250m (12%)    0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 kube-controller-manager-old-k8s-version-918475    200m (10%)    0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-proxy-776dm                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-old-k8s-version-918475             100m (5%)     0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 27s                kube-proxy       
	  Normal  NodeHasSufficientMemory  49s (x8 over 49s)  kubelet          Node old-k8s-version-918475 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    49s (x8 over 49s)  kubelet          Node old-k8s-version-918475 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     49s (x8 over 49s)  kubelet          Node old-k8s-version-918475 status is now: NodeHasSufficientPID
	  Normal  Starting                 41s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  41s                kubelet          Node old-k8s-version-918475 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    41s                kubelet          Node old-k8s-version-918475 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     41s                kubelet          Node old-k8s-version-918475 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           29s                node-controller  Node old-k8s-version-918475 event: Registered Node old-k8s-version-918475 in Controller
	  Normal  NodeReady                15s                kubelet          Node old-k8s-version-918475 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct18 17:51] overlayfs: idmapped layers are currently not supported
	[Oct18 17:53] overlayfs: idmapped layers are currently not supported
	[Oct18 17:58] overlayfs: idmapped layers are currently not supported
	[ +33.320958] overlayfs: idmapped layers are currently not supported
	[Oct18 18:00] overlayfs: idmapped layers are currently not supported
	[Oct18 18:01] overlayfs: idmapped layers are currently not supported
	[Oct18 18:02] overlayfs: idmapped layers are currently not supported
	[Oct18 18:04] overlayfs: idmapped layers are currently not supported
	[ +24.403909] overlayfs: idmapped layers are currently not supported
	[  +6.162774] overlayfs: idmapped layers are currently not supported
	[Oct18 18:05] overlayfs: idmapped layers are currently not supported
	[ +25.128760] overlayfs: idmapped layers are currently not supported
	[Oct18 18:06] overlayfs: idmapped layers are currently not supported
	[Oct18 18:07] overlayfs: idmapped layers are currently not supported
	[Oct18 18:08] overlayfs: idmapped layers are currently not supported
	[Oct18 18:09] overlayfs: idmapped layers are currently not supported
	[Oct18 18:11] overlayfs: idmapped layers are currently not supported
	[Oct18 18:13] overlayfs: idmapped layers are currently not supported
	[ +30.969240] overlayfs: idmapped layers are currently not supported
	[Oct18 18:15] overlayfs: idmapped layers are currently not supported
	[Oct18 18:16] overlayfs: idmapped layers are currently not supported
	[Oct18 18:17] overlayfs: idmapped layers are currently not supported
	[ +23.167826] overlayfs: idmapped layers are currently not supported
	[Oct18 18:18] overlayfs: idmapped layers are currently not supported
	[ +38.509809] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [cf43027dd90d568524594cff7720e648e4d8d8d582b8309a8c86573990a78cfa] <==
	{"level":"info","ts":"2025-10-18T18:18:46.750845Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-18T18:18:46.750907Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-18T18:18:46.750961Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-18T18:18:46.7512Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-18T18:18:46.751245Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-18T18:18:46.753397Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-10-18T18:18:46.753562Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-10-18T18:18:46.927796Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2025-10-18T18:18:46.927908Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-10-18T18:18:46.927948Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2025-10-18T18:18:46.927995Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2025-10-18T18:18:46.928024Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-10-18T18:18:46.928064Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-10-18T18:18:46.928097Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-10-18T18:18:46.931823Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-918475 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-18T18:18:46.932026Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-18T18:18:46.933797Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-10-18T18:18:46.933962Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-18T18:18:46.934097Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-18T18:18:46.935273Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-18T18:18:46.953961Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-18T18:18:46.954077Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-18T18:18:46.954165Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-18T18:18:46.954295Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-18T18:18:46.954358Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 18:19:35 up  2:02,  0 user,  load average: 2.39, 3.20, 2.73
	Linux old-k8s-version-918475 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5a1f17acad7b82c2c34af6e3056b9e11e62eb2e556445d3b6745764fc5a0be96] <==
	I1018 18:19:10.317530       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 18:19:10.318564       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1018 18:19:10.318737       1 main.go:148] setting mtu 1500 for CNI 
	I1018 18:19:10.318757       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 18:19:10.318772       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T18:19:10Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 18:19:10.606123       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 18:19:10.606162       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 18:19:10.606172       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 18:19:10.606508       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 18:19:10.808876       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 18:19:10.808902       1 metrics.go:72] Registering metrics
	I1018 18:19:10.810480       1 controller.go:711] "Syncing nftables rules"
	I1018 18:19:20.610218       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 18:19:20.610272       1 main.go:301] handling current node
	I1018 18:19:30.608084       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 18:19:30.608121       1 main.go:301] handling current node
	
	
	==> kube-apiserver [77c8affe0fb5aad24ee709ae2cc82ce69951e9eab08928428179382c536325f8] <==
	I1018 18:18:50.941635       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1018 18:18:50.941762       1 shared_informer.go:318] Caches are synced for configmaps
	I1018 18:18:50.943863       1 controller.go:624] quota admission added evaluator for: namespaces
	I1018 18:18:50.954240       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1018 18:18:50.954347       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1018 18:18:50.955159       1 aggregator.go:166] initial CRD sync complete...
	I1018 18:18:50.955184       1 autoregister_controller.go:141] Starting autoregister controller
	I1018 18:18:50.955191       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 18:18:50.955199       1 cache.go:39] Caches are synced for autoregister controller
	I1018 18:18:50.979547       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 18:18:51.677300       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1018 18:18:51.682582       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1018 18:18:51.682601       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 18:18:52.302031       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 18:18:52.353233       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 18:18:52.519098       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1018 18:18:52.526213       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1018 18:18:52.527277       1 controller.go:624] quota admission added evaluator for: endpoints
	I1018 18:18:52.534372       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 18:18:52.866713       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1018 18:18:54.324488       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1018 18:18:54.340753       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1018 18:18:54.355109       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1018 18:19:07.167588       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1018 18:19:07.404548       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [57ccd19fc28fd98feb1a2adfdfe98cebfd91e4d4578a6065b14ebed0cbe9f02b] <==
	I1018 18:19:06.611279       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I1018 18:19:06.658656       1 shared_informer.go:318] Caches are synced for disruption
	I1018 18:19:07.030853       1 shared_informer.go:318] Caches are synced for garbage collector
	I1018 18:19:07.063973       1 shared_informer.go:318] Caches are synced for garbage collector
	I1018 18:19:07.064002       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1018 18:19:07.188182       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-776dm"
	I1018 18:19:07.193613       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-l8wgz"
	I1018 18:19:07.433688       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1018 18:19:07.532979       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-r4vrp"
	I1018 18:19:07.551300       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-kd9bz"
	I1018 18:19:07.592178       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="158.111031ms"
	I1018 18:19:07.626716       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="34.478428ms"
	I1018 18:19:07.626842       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="78.819µs"
	I1018 18:19:07.974947       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1018 18:19:08.025678       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-r4vrp"
	I1018 18:19:08.051181       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="76.511258ms"
	I1018 18:19:08.062153       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="10.914822ms"
	I1018 18:19:08.101583       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="39.363191ms"
	I1018 18:19:08.101762       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="115.038µs"
	I1018 18:19:21.037111       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="99.595µs"
	I1018 18:19:21.059525       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="76.604µs"
	I1018 18:19:21.462267       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1018 18:19:21.747221       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="104.921µs"
	I1018 18:19:21.786432       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="15.000027ms"
	I1018 18:19:21.786587       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="68.112µs"
	
	
	==> kube-proxy [de372e3908f407977ddaa621899498cfacf4c5c4828a9f417b59c20cbe184a6c] <==
	I1018 18:19:07.776462       1 server_others.go:69] "Using iptables proxy"
	I1018 18:19:07.818869       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1018 18:19:07.861458       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 18:19:07.863850       1 server_others.go:152] "Using iptables Proxier"
	I1018 18:19:07.863883       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1018 18:19:07.863890       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1018 18:19:07.863921       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1018 18:19:07.864113       1 server.go:846] "Version info" version="v1.28.0"
	I1018 18:19:07.864123       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 18:19:07.866455       1 config.go:188] "Starting service config controller"
	I1018 18:19:07.866478       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1018 18:19:07.866515       1 config.go:97] "Starting endpoint slice config controller"
	I1018 18:19:07.866520       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1018 18:19:07.874524       1 config.go:315] "Starting node config controller"
	I1018 18:19:07.874550       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1018 18:19:07.967060       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1018 18:19:07.967138       1 shared_informer.go:318] Caches are synced for service config
	I1018 18:19:07.975036       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [e343fbbdfcd68241a4310acbe087834fbe1ee9c3e2031fdd239b170cf0ddae83] <==
	W1018 18:18:50.915828       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1018 18:18:50.915865       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1018 18:18:50.915919       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1018 18:18:50.915934       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1018 18:18:50.916002       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1018 18:18:50.916045       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1018 18:18:50.916170       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1018 18:18:50.916210       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1018 18:18:50.917129       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1018 18:18:50.917200       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1018 18:18:51.735816       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1018 18:18:51.735849       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1018 18:18:51.789875       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1018 18:18:51.789910       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1018 18:18:51.872664       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1018 18:18:51.872773       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1018 18:18:51.948413       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1018 18:18:51.948447       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1018 18:18:51.980087       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1018 18:18:51.980212       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1018 18:18:51.995692       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1018 18:18:51.995786       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1018 18:18:52.007963       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1018 18:18:52.008089       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I1018 18:18:52.483656       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 18 18:19:07 old-k8s-version-918475 kubelet[1386]: I1018 18:19:07.290972    1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/1ce8f8fe-9578-4405-b71b-8dbb34c91ff8-cni-cfg\") pod \"kindnet-l8wgz\" (UID: \"1ce8f8fe-9578-4405-b71b-8dbb34c91ff8\") " pod="kube-system/kindnet-l8wgz"
	Oct 18 18:19:07 old-k8s-version-918475 kubelet[1386]: I1018 18:19:07.291046    1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1ce8f8fe-9578-4405-b71b-8dbb34c91ff8-lib-modules\") pod \"kindnet-l8wgz\" (UID: \"1ce8f8fe-9578-4405-b71b-8dbb34c91ff8\") " pod="kube-system/kindnet-l8wgz"
	Oct 18 18:19:07 old-k8s-version-918475 kubelet[1386]: I1018 18:19:07.291081    1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8dc0388f-47c7-46e9-9f05-4815ce812559-xtables-lock\") pod \"kube-proxy-776dm\" (UID: \"8dc0388f-47c7-46e9-9f05-4815ce812559\") " pod="kube-system/kube-proxy-776dm"
	Oct 18 18:19:07 old-k8s-version-918475 kubelet[1386]: I1018 18:19:07.291197    1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1ce8f8fe-9578-4405-b71b-8dbb34c91ff8-xtables-lock\") pod \"kindnet-l8wgz\" (UID: \"1ce8f8fe-9578-4405-b71b-8dbb34c91ff8\") " pod="kube-system/kindnet-l8wgz"
	Oct 18 18:19:07 old-k8s-version-918475 kubelet[1386]: I1018 18:19:07.291332    1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzbhh\" (UniqueName: \"kubernetes.io/projected/1ce8f8fe-9578-4405-b71b-8dbb34c91ff8-kube-api-access-qzbhh\") pod \"kindnet-l8wgz\" (UID: \"1ce8f8fe-9578-4405-b71b-8dbb34c91ff8\") " pod="kube-system/kindnet-l8wgz"
	Oct 18 18:19:07 old-k8s-version-918475 kubelet[1386]: I1018 18:19:07.291393    1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8dc0388f-47c7-46e9-9f05-4815ce812559-kube-proxy\") pod \"kube-proxy-776dm\" (UID: \"8dc0388f-47c7-46e9-9f05-4815ce812559\") " pod="kube-system/kube-proxy-776dm"
	Oct 18 18:19:07 old-k8s-version-918475 kubelet[1386]: I1018 18:19:07.291444    1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8dc0388f-47c7-46e9-9f05-4815ce812559-lib-modules\") pod \"kube-proxy-776dm\" (UID: \"8dc0388f-47c7-46e9-9f05-4815ce812559\") " pod="kube-system/kube-proxy-776dm"
	Oct 18 18:19:07 old-k8s-version-918475 kubelet[1386]: I1018 18:19:07.291528    1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8g8d\" (UniqueName: \"kubernetes.io/projected/8dc0388f-47c7-46e9-9f05-4815ce812559-kube-api-access-p8g8d\") pod \"kube-proxy-776dm\" (UID: \"8dc0388f-47c7-46e9-9f05-4815ce812559\") " pod="kube-system/kube-proxy-776dm"
	Oct 18 18:19:07 old-k8s-version-918475 kubelet[1386]: W1018 18:19:07.541765    1386 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/13ab62783a421f101660de74d2bec3818ff41a6620bfd3ec135d6adb2e8c1df6/crio-ec7485968f0d0bac54c691911f8a3bba5a0ce206c84ff2d878206d9f0e05d203 WatchSource:0}: Error finding container ec7485968f0d0bac54c691911f8a3bba5a0ce206c84ff2d878206d9f0e05d203: Status 404 returned error can't find the container with id ec7485968f0d0bac54c691911f8a3bba5a0ce206c84ff2d878206d9f0e05d203
	Oct 18 18:19:07 old-k8s-version-918475 kubelet[1386]: W1018 18:19:07.546676    1386 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/13ab62783a421f101660de74d2bec3818ff41a6620bfd3ec135d6adb2e8c1df6/crio-93ebc65d910eb67cee9dcfbfab0d660bfbd40e733f95bb4c3f1fb535833fd6a9 WatchSource:0}: Error finding container 93ebc65d910eb67cee9dcfbfab0d660bfbd40e733f95bb4c3f1fb535833fd6a9: Status 404 returned error can't find the container with id 93ebc65d910eb67cee9dcfbfab0d660bfbd40e733f95bb4c3f1fb535833fd6a9
	Oct 18 18:19:10 old-k8s-version-918475 kubelet[1386]: I1018 18:19:10.724712    1386 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-776dm" podStartSLOduration=3.724659711 podCreationTimestamp="2025-10-18 18:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 18:19:08.717762859 +0000 UTC m=+14.431491531" watchObservedRunningTime="2025-10-18 18:19:10.724659711 +0000 UTC m=+16.438388382"
	Oct 18 18:19:14 old-k8s-version-918475 kubelet[1386]: I1018 18:19:14.561697    1386 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-l8wgz" podStartSLOduration=4.873521931 podCreationTimestamp="2025-10-18 18:19:07 +0000 UTC" firstStartedPulling="2025-10-18 18:19:07.555178355 +0000 UTC m=+13.268907027" lastFinishedPulling="2025-10-18 18:19:10.243301574 +0000 UTC m=+15.957030246" observedRunningTime="2025-10-18 18:19:10.730700592 +0000 UTC m=+16.444429272" watchObservedRunningTime="2025-10-18 18:19:14.56164515 +0000 UTC m=+20.275373846"
	Oct 18 18:19:20 old-k8s-version-918475 kubelet[1386]: I1018 18:19:20.998826    1386 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Oct 18 18:19:21 old-k8s-version-918475 kubelet[1386]: I1018 18:19:21.035484    1386 topology_manager.go:215] "Topology Admit Handler" podUID="db934def-c206-49f5-93c1-5e9e72029aea" podNamespace="kube-system" podName="coredns-5dd5756b68-kd9bz"
	Oct 18 18:19:21 old-k8s-version-918475 kubelet[1386]: I1018 18:19:21.041634    1386 topology_manager.go:215] "Topology Admit Handler" podUID="486aafde-9949-4760-8b48-d58682b50726" podNamespace="kube-system" podName="storage-provisioner"
	Oct 18 18:19:21 old-k8s-version-918475 kubelet[1386]: I1018 18:19:21.092712    1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/db934def-c206-49f5-93c1-5e9e72029aea-config-volume\") pod \"coredns-5dd5756b68-kd9bz\" (UID: \"db934def-c206-49f5-93c1-5e9e72029aea\") " pod="kube-system/coredns-5dd5756b68-kd9bz"
	Oct 18 18:19:21 old-k8s-version-918475 kubelet[1386]: I1018 18:19:21.093023    1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckk92\" (UniqueName: \"kubernetes.io/projected/db934def-c206-49f5-93c1-5e9e72029aea-kube-api-access-ckk92\") pod \"coredns-5dd5756b68-kd9bz\" (UID: \"db934def-c206-49f5-93c1-5e9e72029aea\") " pod="kube-system/coredns-5dd5756b68-kd9bz"
	Oct 18 18:19:21 old-k8s-version-918475 kubelet[1386]: I1018 18:19:21.194114    1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgbcj\" (UniqueName: \"kubernetes.io/projected/486aafde-9949-4760-8b48-d58682b50726-kube-api-access-hgbcj\") pod \"storage-provisioner\" (UID: \"486aafde-9949-4760-8b48-d58682b50726\") " pod="kube-system/storage-provisioner"
	Oct 18 18:19:21 old-k8s-version-918475 kubelet[1386]: I1018 18:19:21.194201    1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/486aafde-9949-4760-8b48-d58682b50726-tmp\") pod \"storage-provisioner\" (UID: \"486aafde-9949-4760-8b48-d58682b50726\") " pod="kube-system/storage-provisioner"
	Oct 18 18:19:21 old-k8s-version-918475 kubelet[1386]: W1018 18:19:21.352082    1386 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/13ab62783a421f101660de74d2bec3818ff41a6620bfd3ec135d6adb2e8c1df6/crio-850660f0f702a9f0dafd55517d6b96280cc9135dc885f6c5345151e031386b21 WatchSource:0}: Error finding container 850660f0f702a9f0dafd55517d6b96280cc9135dc885f6c5345151e031386b21: Status 404 returned error can't find the container with id 850660f0f702a9f0dafd55517d6b96280cc9135dc885f6c5345151e031386b21
	Oct 18 18:19:21 old-k8s-version-918475 kubelet[1386]: W1018 18:19:21.361753    1386 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/13ab62783a421f101660de74d2bec3818ff41a6620bfd3ec135d6adb2e8c1df6/crio-d4bd55de9e87a6c69f8c7f45bc5d0c0d7c32633c11c8d1c62d1d9035d49676e0 WatchSource:0}: Error finding container d4bd55de9e87a6c69f8c7f45bc5d0c0d7c32633c11c8d1c62d1d9035d49676e0: Status 404 returned error can't find the container with id d4bd55de9e87a6c69f8c7f45bc5d0c0d7c32633c11c8d1c62d1d9035d49676e0
	Oct 18 18:19:21 old-k8s-version-918475 kubelet[1386]: I1018 18:19:21.769419    1386 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-kd9bz" podStartSLOduration=14.769354326 podCreationTimestamp="2025-10-18 18:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 18:19:21.747218873 +0000 UTC m=+27.460947545" watchObservedRunningTime="2025-10-18 18:19:21.769354326 +0000 UTC m=+27.483082998"
	Oct 18 18:19:23 old-k8s-version-918475 kubelet[1386]: I1018 18:19:23.724810    1386 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.72476088 podCreationTimestamp="2025-10-18 18:19:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 18:19:21.7957912 +0000 UTC m=+27.509519872" watchObservedRunningTime="2025-10-18 18:19:23.72476088 +0000 UTC m=+29.438489552"
	Oct 18 18:19:23 old-k8s-version-918475 kubelet[1386]: I1018 18:19:23.725047    1386 topology_manager.go:215] "Topology Admit Handler" podUID="d5268bf2-03ea-4390-b3f8-efc451427c93" podNamespace="default" podName="busybox"
	Oct 18 18:19:23 old-k8s-version-918475 kubelet[1386]: I1018 18:19:23.826919    1386 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwh7p\" (UniqueName: \"kubernetes.io/projected/d5268bf2-03ea-4390-b3f8-efc451427c93-kube-api-access-mwh7p\") pod \"busybox\" (UID: \"d5268bf2-03ea-4390-b3f8-efc451427c93\") " pod="default/busybox"
	
	
	==> storage-provisioner [cdacd624e3b27a382fb7b85bab7a558dc232e07e3a886ba8029cbf0d5c004a74] <==
	I1018 18:19:21.415199       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 18:19:21.439325       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 18:19:21.439477       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1018 18:19:21.492648       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"50769fb3-6713-46ea-856e-a4e705d84615", APIVersion:"v1", ResourceVersion:"443", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-918475_b6ad945a-bb46-4351-9c50-02922edf88e0 became leader
	I1018 18:19:21.492760       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 18:19:21.492836       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-918475_b6ad945a-bb46-4351-9c50-02922edf88e0!
	I1018 18:19:21.593847       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-918475_b6ad945a-bb46-4351-9c50-02922edf88e0!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-918475 -n old-k8s-version-918475
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-918475 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.47s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (6.5s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-918475 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p old-k8s-version-918475 --alsologtostderr -v=1: exit status 80 (2.013378336s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-918475 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 18:20:52.112224  196119 out.go:360] Setting OutFile to fd 1 ...
	I1018 18:20:52.112493  196119 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 18:20:52.112522  196119 out.go:374] Setting ErrFile to fd 2...
	I1018 18:20:52.112542  196119 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 18:20:52.112840  196119 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-2509/.minikube/bin
	I1018 18:20:52.113233  196119 out.go:368] Setting JSON to false
	I1018 18:20:52.113285  196119 mustload.go:65] Loading cluster: old-k8s-version-918475
	I1018 18:20:52.113746  196119 config.go:182] Loaded profile config "old-k8s-version-918475": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1018 18:20:52.114921  196119 cli_runner.go:164] Run: docker container inspect old-k8s-version-918475 --format={{.State.Status}}
	I1018 18:20:52.134500  196119 host.go:66] Checking if "old-k8s-version-918475" exists ...
	I1018 18:20:52.134829  196119 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 18:20:52.207119  196119 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-18 18:20:52.19710676 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 18:20:52.207799  196119 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-918475 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1018 18:20:52.211148  196119 out.go:179] * Pausing node old-k8s-version-918475 ... 
	I1018 18:20:52.214865  196119 host.go:66] Checking if "old-k8s-version-918475" exists ...
	I1018 18:20:52.215201  196119 ssh_runner.go:195] Run: systemctl --version
	I1018 18:20:52.215281  196119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-918475
	I1018 18:20:52.233962  196119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/old-k8s-version-918475/id_rsa Username:docker}
	I1018 18:20:52.339623  196119 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 18:20:52.358584  196119 pause.go:52] kubelet running: true
	I1018 18:20:52.358676  196119 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 18:20:52.608391  196119 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 18:20:52.608483  196119 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 18:20:52.681923  196119 cri.go:89] found id: "cd3f22f6532cd26d3f39cbbc7521c9d8a7a712934dd47081a9ad7584afc64c38"
	I1018 18:20:52.681947  196119 cri.go:89] found id: "37f6234e2a329ee90bd5cd471d64c651488562dd21c5b2e64d113322e20e47fe"
	I1018 18:20:52.681954  196119 cri.go:89] found id: "6b83bfedf7c37fc9bf7f3d03db7cee37209be54754656efb059c09c8f2eb2ceb"
	I1018 18:20:52.681958  196119 cri.go:89] found id: "235f17d855192310ccb1a489b3d0c7f7ebbad52420790a930e349f341c3e8d8f"
	I1018 18:20:52.681962  196119 cri.go:89] found id: "788c1c65cd78f7ed26e15732fc3c949da8652d1331bb7a89fd1b2fa40c67386f"
	I1018 18:20:52.681965  196119 cri.go:89] found id: "ae1eebdab3cf71a07bd4eae6a705ba7ff86c020ba58671cfcc9759010c46c239"
	I1018 18:20:52.681968  196119 cri.go:89] found id: "c25897e752da3dc12951f02ee89f3ec475fb055cb90f27cb1ccc4cde1fc8c6de"
	I1018 18:20:52.681971  196119 cri.go:89] found id: "4c59d57fbe375bf1c3be0746a92ca4d85fdf6a06d96f6437faeb7e9324c89a9b"
	I1018 18:20:52.681975  196119 cri.go:89] found id: "ccff0d24759b80bcd65a9894c590246579f8cca877d5c90bf983408a0b729bb9"
	I1018 18:20:52.681981  196119 cri.go:89] found id: "3392b5258cdf8bfa7e27174e4bb9b951b93b4fad5ce1d4d1606afad1f4c6d3da"
	I1018 18:20:52.681985  196119 cri.go:89] found id: "d89d2a46547ff61a18edd8b04922f7e9a116c1917c95fdc49f17527d20b9e15e"
	I1018 18:20:52.681988  196119 cri.go:89] found id: ""
	I1018 18:20:52.682039  196119 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 18:20:52.693427  196119 retry.go:31] will retry after 297.246354ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T18:20:52Z" level=error msg="open /run/runc: no such file or directory"
	I1018 18:20:52.991824  196119 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 18:20:53.008539  196119 pause.go:52] kubelet running: false
	I1018 18:20:53.008662  196119 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 18:20:53.212355  196119 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 18:20:53.212474  196119 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 18:20:53.306480  196119 cri.go:89] found id: "cd3f22f6532cd26d3f39cbbc7521c9d8a7a712934dd47081a9ad7584afc64c38"
	I1018 18:20:53.306544  196119 cri.go:89] found id: "37f6234e2a329ee90bd5cd471d64c651488562dd21c5b2e64d113322e20e47fe"
	I1018 18:20:53.306562  196119 cri.go:89] found id: "6b83bfedf7c37fc9bf7f3d03db7cee37209be54754656efb059c09c8f2eb2ceb"
	I1018 18:20:53.306579  196119 cri.go:89] found id: "235f17d855192310ccb1a489b3d0c7f7ebbad52420790a930e349f341c3e8d8f"
	I1018 18:20:53.306596  196119 cri.go:89] found id: "788c1c65cd78f7ed26e15732fc3c949da8652d1331bb7a89fd1b2fa40c67386f"
	I1018 18:20:53.306626  196119 cri.go:89] found id: "ae1eebdab3cf71a07bd4eae6a705ba7ff86c020ba58671cfcc9759010c46c239"
	I1018 18:20:53.306651  196119 cri.go:89] found id: "c25897e752da3dc12951f02ee89f3ec475fb055cb90f27cb1ccc4cde1fc8c6de"
	I1018 18:20:53.306675  196119 cri.go:89] found id: "4c59d57fbe375bf1c3be0746a92ca4d85fdf6a06d96f6437faeb7e9324c89a9b"
	I1018 18:20:53.306691  196119 cri.go:89] found id: "ccff0d24759b80bcd65a9894c590246579f8cca877d5c90bf983408a0b729bb9"
	I1018 18:20:53.306709  196119 cri.go:89] found id: "3392b5258cdf8bfa7e27174e4bb9b951b93b4fad5ce1d4d1606afad1f4c6d3da"
	I1018 18:20:53.306735  196119 cri.go:89] found id: "d89d2a46547ff61a18edd8b04922f7e9a116c1917c95fdc49f17527d20b9e15e"
	I1018 18:20:53.306756  196119 cri.go:89] found id: ""
	I1018 18:20:53.306819  196119 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 18:20:53.324869  196119 retry.go:31] will retry after 452.252959ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T18:20:53Z" level=error msg="open /run/runc: no such file or directory"
	I1018 18:20:53.777486  196119 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 18:20:53.790458  196119 pause.go:52] kubelet running: false
	I1018 18:20:53.790524  196119 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 18:20:53.952059  196119 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 18:20:53.952190  196119 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 18:20:54.033557  196119 cri.go:89] found id: "cd3f22f6532cd26d3f39cbbc7521c9d8a7a712934dd47081a9ad7584afc64c38"
	I1018 18:20:54.033627  196119 cri.go:89] found id: "37f6234e2a329ee90bd5cd471d64c651488562dd21c5b2e64d113322e20e47fe"
	I1018 18:20:54.033645  196119 cri.go:89] found id: "6b83bfedf7c37fc9bf7f3d03db7cee37209be54754656efb059c09c8f2eb2ceb"
	I1018 18:20:54.033662  196119 cri.go:89] found id: "235f17d855192310ccb1a489b3d0c7f7ebbad52420790a930e349f341c3e8d8f"
	I1018 18:20:54.033673  196119 cri.go:89] found id: "788c1c65cd78f7ed26e15732fc3c949da8652d1331bb7a89fd1b2fa40c67386f"
	I1018 18:20:54.033677  196119 cri.go:89] found id: "ae1eebdab3cf71a07bd4eae6a705ba7ff86c020ba58671cfcc9759010c46c239"
	I1018 18:20:54.033681  196119 cri.go:89] found id: "c25897e752da3dc12951f02ee89f3ec475fb055cb90f27cb1ccc4cde1fc8c6de"
	I1018 18:20:54.033684  196119 cri.go:89] found id: "4c59d57fbe375bf1c3be0746a92ca4d85fdf6a06d96f6437faeb7e9324c89a9b"
	I1018 18:20:54.033687  196119 cri.go:89] found id: "ccff0d24759b80bcd65a9894c590246579f8cca877d5c90bf983408a0b729bb9"
	I1018 18:20:54.033717  196119 cri.go:89] found id: "3392b5258cdf8bfa7e27174e4bb9b951b93b4fad5ce1d4d1606afad1f4c6d3da"
	I1018 18:20:54.033726  196119 cri.go:89] found id: "d89d2a46547ff61a18edd8b04922f7e9a116c1917c95fdc49f17527d20b9e15e"
	I1018 18:20:54.033730  196119 cri.go:89] found id: ""
	I1018 18:20:54.033812  196119 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 18:20:54.053336  196119 out.go:203] 
	W1018 18:20:54.057288  196119 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T18:20:54Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T18:20:54Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 18:20:54.057372  196119 out.go:285] * 
	* 
	W1018 18:20:54.063811  196119 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 18:20:54.066214  196119 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p old-k8s-version-918475 --alsologtostderr -v=1 failed: exit status 80
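The GUEST_PAUSE failure above reduces to one command: `sudo runc list -f json` exits with `open /run/runc: no such file or directory`, even though the crictl queries in the same log still return running kube-system container IDs. A minimal reproduction sketch, assuming only a standard `minikube ssh` invocation against this profile; the two remote commands are copied from the log above, and the final directory check is an added diagnostic, not something the test itself runs:

	# Re-run the listing that the pause path retried before giving up
	out/minikube-linux-arm64 ssh -p old-k8s-version-918475 "sudo runc list -f json"
	# Compare with CRI-O's own view, which does not depend on the runc state directory
	out/minikube-linux-arm64 ssh -p old-k8s-version-918475 "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	# Added diagnostic (hypothetical): confirm whether /run/runc exists inside the node
	out/minikube-linux-arm64 ssh -p old-k8s-version-918475 "sudo ls -ld /run/runc"

If the crictl listing succeeds while /run/runc is absent, the node's containers are being managed under a different low-level runtime root than the one `runc list` consults, which matches the error text in the stderr block above.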
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-918475
helpers_test.go:243: (dbg) docker inspect old-k8s-version-918475:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "13ab62783a421f101660de74d2bec3818ff41a6620bfd3ec135d6adb2e8c1df6",
	        "Created": "2025-10-18T18:18:25.775142041Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 194009,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T18:19:48.922861232Z",
	            "FinishedAt": "2025-10-18T18:19:48.122280911Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/13ab62783a421f101660de74d2bec3818ff41a6620bfd3ec135d6adb2e8c1df6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/13ab62783a421f101660de74d2bec3818ff41a6620bfd3ec135d6adb2e8c1df6/hostname",
	        "HostsPath": "/var/lib/docker/containers/13ab62783a421f101660de74d2bec3818ff41a6620bfd3ec135d6adb2e8c1df6/hosts",
	        "LogPath": "/var/lib/docker/containers/13ab62783a421f101660de74d2bec3818ff41a6620bfd3ec135d6adb2e8c1df6/13ab62783a421f101660de74d2bec3818ff41a6620bfd3ec135d6adb2e8c1df6-json.log",
	        "Name": "/old-k8s-version-918475",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-918475:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-918475",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "13ab62783a421f101660de74d2bec3818ff41a6620bfd3ec135d6adb2e8c1df6",
	                "LowerDir": "/var/lib/docker/overlay2/3cbaaca74a96e66e2281894a8ded9a8b4932ecc5b1eaa08dd2c608cf2a8fb5aa-init/diff:/var/lib/docker/overlay2/584ab177b02ad2db5330471b7171ad39934c457d8615b9ee4939a04b59f78474/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3cbaaca74a96e66e2281894a8ded9a8b4932ecc5b1eaa08dd2c608cf2a8fb5aa/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3cbaaca74a96e66e2281894a8ded9a8b4932ecc5b1eaa08dd2c608cf2a8fb5aa/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3cbaaca74a96e66e2281894a8ded9a8b4932ecc5b1eaa08dd2c608cf2a8fb5aa/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-918475",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-918475/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-918475",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-918475",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-918475",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c0b0a14b29f981955e7560c774ae2a0df30edf2afdaad0443d92b82ad128c683",
	            "SandboxKey": "/var/run/docker/netns/c0b0a14b29f9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33048"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33049"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33052"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33050"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33051"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-918475": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "22:97:f6:bc:a2:3c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2f21c2763fceb3911d220e045f4c363e42b3b9b9b29b62d56c07c23b82cc830b",
	                    "EndpointID": "572525a6cf4544c4163ac9f23cbe971eb1a12786d5a30a6aea660308907d2e4c",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-918475",
	                        "13ab62783a42"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-918475 -n old-k8s-version-918475
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-918475 -n old-k8s-version-918475: exit status 2 (359.24846ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-918475 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-918475 logs -n 25: (1.409176359s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────────
───┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────────
───┤
	│ ssh     │ -p cilium-111074 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-111074            │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │                     │
	│ ssh     │ -p cilium-111074 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-111074            │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │                     │
	│ ssh     │ -p cilium-111074 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-111074            │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │                     │
	│ ssh     │ -p cilium-111074 sudo containerd config dump                                                                                                                                                                                                  │ cilium-111074            │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │                     │
	│ ssh     │ -p cilium-111074 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-111074            │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │                     │
	│ ssh     │ -p cilium-111074 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-111074            │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │                     │
	│ ssh     │ -p cilium-111074 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-111074            │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │                     │
	│ ssh     │ -p cilium-111074 sudo crio config                                                                                                                                                                                                             │ cilium-111074            │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │                     │
	│ delete  │ -p cilium-111074                                                                                                                                                                                                                              │ cilium-111074            │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │ 18 Oct 25 18:16 UTC │
	│ start   │ -p force-systemd-env-785999 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-785999 │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │ 18 Oct 25 18:17 UTC │
	│ pause   │ -p pause-321903 --alsologtostderr -v=5                                                                                                                                                                                                        │ pause-321903             │ jenkins │ v1.37.0 │ 18 Oct 25 18:17 UTC │                     │
	│ delete  │ -p pause-321903                                                                                                                                                                                                                               │ pause-321903             │ jenkins │ v1.37.0 │ 18 Oct 25 18:17 UTC │ 18 Oct 25 18:17 UTC │
	│ start   │ -p cert-expiration-463770 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-463770   │ jenkins │ v1.37.0 │ 18 Oct 25 18:17 UTC │ 18 Oct 25 18:18 UTC │
	│ delete  │ -p force-systemd-env-785999                                                                                                                                                                                                                   │ force-systemd-env-785999 │ jenkins │ v1.37.0 │ 18 Oct 25 18:17 UTC │ 18 Oct 25 18:17 UTC │
	│ start   │ -p cert-options-327418 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-327418      │ jenkins │ v1.37.0 │ 18 Oct 25 18:17 UTC │ 18 Oct 25 18:18 UTC │
	│ ssh     │ cert-options-327418 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-327418      │ jenkins │ v1.37.0 │ 18 Oct 25 18:18 UTC │ 18 Oct 25 18:18 UTC │
	│ ssh     │ -p cert-options-327418 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-327418      │ jenkins │ v1.37.0 │ 18 Oct 25 18:18 UTC │ 18 Oct 25 18:18 UTC │
	│ delete  │ -p cert-options-327418                                                                                                                                                                                                                        │ cert-options-327418      │ jenkins │ v1.37.0 │ 18 Oct 25 18:18 UTC │ 18 Oct 25 18:18 UTC │
	│ start   │ -p old-k8s-version-918475 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-918475   │ jenkins │ v1.37.0 │ 18 Oct 25 18:18 UTC │ 18 Oct 25 18:19 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-918475 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-918475   │ jenkins │ v1.37.0 │ 18 Oct 25 18:19 UTC │                     │
	│ stop    │ -p old-k8s-version-918475 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-918475   │ jenkins │ v1.37.0 │ 18 Oct 25 18:19 UTC │ 18 Oct 25 18:19 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-918475 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-918475   │ jenkins │ v1.37.0 │ 18 Oct 25 18:19 UTC │ 18 Oct 25 18:19 UTC │
	│ start   │ -p old-k8s-version-918475 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-918475   │ jenkins │ v1.37.0 │ 18 Oct 25 18:19 UTC │ 18 Oct 25 18:20 UTC │
	│ image   │ old-k8s-version-918475 image list --format=json                                                                                                                                                                                               │ old-k8s-version-918475   │ jenkins │ v1.37.0 │ 18 Oct 25 18:20 UTC │ 18 Oct 25 18:20 UTC │
	│ pause   │ -p old-k8s-version-918475 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-918475   │ jenkins │ v1.37.0 │ 18 Oct 25 18:20 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────────
───┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 18:19:48
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 18:19:48.652019  193878 out.go:360] Setting OutFile to fd 1 ...
	I1018 18:19:48.652149  193878 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 18:19:48.652159  193878 out.go:374] Setting ErrFile to fd 2...
	I1018 18:19:48.652163  193878 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 18:19:48.652417  193878 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-2509/.minikube/bin
	I1018 18:19:48.652853  193878 out.go:368] Setting JSON to false
	I1018 18:19:48.653906  193878 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7338,"bootTime":1760804251,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1018 18:19:48.653979  193878 start.go:141] virtualization:  
	I1018 18:19:48.656966  193878 out.go:179] * [old-k8s-version-918475] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 18:19:48.660884  193878 out.go:179]   - MINIKUBE_LOCATION=21409
	I1018 18:19:48.660915  193878 notify.go:220] Checking for updates...
	I1018 18:19:48.663842  193878 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 18:19:48.666993  193878 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-2509/kubeconfig
	I1018 18:19:48.669960  193878 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-2509/.minikube
	I1018 18:19:48.673081  193878 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 18:19:48.675953  193878 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 18:19:48.679417  193878 config.go:182] Loaded profile config "old-k8s-version-918475": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1018 18:19:48.682926  193878 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1018 18:19:48.685814  193878 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 18:19:48.723526  193878 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 18:19:48.723647  193878 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 18:19:48.777653  193878 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 18:19:48.767934557 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 18:19:48.777755  193878 docker.go:318] overlay module found
	I1018 18:19:48.781004  193878 out.go:179] * Using the docker driver based on existing profile
	I1018 18:19:48.783877  193878 start.go:305] selected driver: docker
	I1018 18:19:48.783893  193878 start.go:925] validating driver "docker" against &{Name:old-k8s-version-918475 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-918475 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 18:19:48.783982  193878 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 18:19:48.784731  193878 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 18:19:48.836456  193878 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 18:19:48.823372055 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 18:19:48.836799  193878 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 18:19:48.836827  193878 cni.go:84] Creating CNI manager for ""
	I1018 18:19:48.836884  193878 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 18:19:48.836927  193878 start.go:349] cluster config:
	{Name:old-k8s-version-918475 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-918475 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 18:19:48.840367  193878 out.go:179] * Starting "old-k8s-version-918475" primary control-plane node in "old-k8s-version-918475" cluster
	I1018 18:19:48.843205  193878 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 18:19:48.846551  193878 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 18:19:48.849573  193878 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1018 18:19:48.849633  193878 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1018 18:19:48.849648  193878 cache.go:58] Caching tarball of preloaded images
	I1018 18:19:48.849660  193878 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 18:19:48.849730  193878 preload.go:233] Found /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 18:19:48.849740  193878 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1018 18:19:48.849843  193878 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/old-k8s-version-918475/config.json ...
	I1018 18:19:48.868734  193878 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 18:19:48.868757  193878 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 18:19:48.868776  193878 cache.go:232] Successfully downloaded all kic artifacts
	I1018 18:19:48.868800  193878 start.go:360] acquireMachinesLock for old-k8s-version-918475: {Name:mke4efc3cc1fc03dd6efc3fd3e060d8181392707 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 18:19:48.868868  193878 start.go:364] duration metric: took 45.137µs to acquireMachinesLock for "old-k8s-version-918475"
	I1018 18:19:48.868891  193878 start.go:96] Skipping create...Using existing machine configuration
	I1018 18:19:48.868903  193878 fix.go:54] fixHost starting: 
	I1018 18:19:48.869199  193878 cli_runner.go:164] Run: docker container inspect old-k8s-version-918475 --format={{.State.Status}}
	I1018 18:19:48.886120  193878 fix.go:112] recreateIfNeeded on old-k8s-version-918475: state=Stopped err=<nil>
	W1018 18:19:48.886149  193878 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 18:19:48.889385  193878 out.go:252] * Restarting existing docker container for "old-k8s-version-918475" ...
	I1018 18:19:48.889464  193878 cli_runner.go:164] Run: docker start old-k8s-version-918475
	I1018 18:19:49.151152  193878 cli_runner.go:164] Run: docker container inspect old-k8s-version-918475 --format={{.State.Status}}
	I1018 18:19:49.175402  193878 kic.go:430] container "old-k8s-version-918475" state is running.
	I1018 18:19:49.176743  193878 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-918475
	I1018 18:19:49.198777  193878 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/old-k8s-version-918475/config.json ...
	I1018 18:19:49.199000  193878 machine.go:93] provisionDockerMachine start ...
	I1018 18:19:49.199056  193878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-918475
	I1018 18:19:49.221834  193878 main.go:141] libmachine: Using SSH client type: native
	I1018 18:19:49.222160  193878 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33048 <nil> <nil>}
	I1018 18:19:49.222174  193878 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 18:19:49.222785  193878 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1018 18:19:52.376652  193878 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-918475
	
	I1018 18:19:52.376677  193878 ubuntu.go:182] provisioning hostname "old-k8s-version-918475"
	I1018 18:19:52.376750  193878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-918475
	I1018 18:19:52.394298  193878 main.go:141] libmachine: Using SSH client type: native
	I1018 18:19:52.394660  193878 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33048 <nil> <nil>}
	I1018 18:19:52.394679  193878 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-918475 && echo "old-k8s-version-918475" | sudo tee /etc/hostname
	I1018 18:19:52.551050  193878 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-918475
	
	I1018 18:19:52.551168  193878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-918475
	I1018 18:19:52.568908  193878 main.go:141] libmachine: Using SSH client type: native
	I1018 18:19:52.569271  193878 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33048 <nil> <nil>}
	I1018 18:19:52.569297  193878 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-918475' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-918475/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-918475' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 18:19:52.728974  193878 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 18:19:52.728999  193878 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-2509/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-2509/.minikube}
	I1018 18:19:52.729029  193878 ubuntu.go:190] setting up certificates
	I1018 18:19:52.729040  193878 provision.go:84] configureAuth start
	I1018 18:19:52.729109  193878 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-918475
	I1018 18:19:52.747396  193878 provision.go:143] copyHostCerts
	I1018 18:19:52.747468  193878 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem, removing ...
	I1018 18:19:52.747486  193878 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem
	I1018 18:19:52.747564  193878 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem (1675 bytes)
	I1018 18:19:52.747676  193878 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem, removing ...
	I1018 18:19:52.747687  193878 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem
	I1018 18:19:52.747720  193878 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem (1078 bytes)
	I1018 18:19:52.747789  193878 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem, removing ...
	I1018 18:19:52.747798  193878 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem
	I1018 18:19:52.747823  193878 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem (1123 bytes)
	I1018 18:19:52.747883  193878 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-918475 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-918475]
	I1018 18:19:53.218890  193878 provision.go:177] copyRemoteCerts
	I1018 18:19:53.218956  193878 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 18:19:53.218995  193878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-918475
	I1018 18:19:53.237300  193878 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/old-k8s-version-918475/id_rsa Username:docker}
	I1018 18:19:53.345447  193878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 18:19:53.363923  193878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1018 18:19:53.380909  193878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 18:19:53.398749  193878 provision.go:87] duration metric: took 669.682007ms to configureAuth
	I1018 18:19:53.398777  193878 ubuntu.go:206] setting minikube options for container-runtime
	I1018 18:19:53.398963  193878 config.go:182] Loaded profile config "old-k8s-version-918475": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1018 18:19:53.399076  193878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-918475
	I1018 18:19:53.416010  193878 main.go:141] libmachine: Using SSH client type: native
	I1018 18:19:53.416313  193878 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33048 <nil> <nil>}
	I1018 18:19:53.416333  193878 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 18:19:53.727261  193878 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 18:19:53.727327  193878 machine.go:96] duration metric: took 4.528317466s to provisionDockerMachine
	I1018 18:19:53.727350  193878 start.go:293] postStartSetup for "old-k8s-version-918475" (driver="docker")
	I1018 18:19:53.727375  193878 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 18:19:53.727486  193878 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 18:19:53.727568  193878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-918475
	I1018 18:19:53.754868  193878 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/old-k8s-version-918475/id_rsa Username:docker}
	I1018 18:19:53.861028  193878 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 18:19:53.864441  193878 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 18:19:53.864469  193878 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 18:19:53.864481  193878 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/addons for local assets ...
	I1018 18:19:53.864543  193878 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/files for local assets ...
	I1018 18:19:53.864630  193878 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> 43202.pem in /etc/ssl/certs
	I1018 18:19:53.864752  193878 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 18:19:53.872357  193878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /etc/ssl/certs/43202.pem (1708 bytes)
	I1018 18:19:53.890538  193878 start.go:296] duration metric: took 163.159117ms for postStartSetup
	I1018 18:19:53.890633  193878 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 18:19:53.890678  193878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-918475
	I1018 18:19:53.910117  193878 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/old-k8s-version-918475/id_rsa Username:docker}
	I1018 18:19:54.011279  193878 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 18:19:54.016503  193878 fix.go:56] duration metric: took 5.147592693s for fixHost
	I1018 18:19:54.016530  193878 start.go:83] releasing machines lock for "old-k8s-version-918475", held for 5.147650138s
	I1018 18:19:54.016596  193878 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-918475
	I1018 18:19:54.034209  193878 ssh_runner.go:195] Run: cat /version.json
	I1018 18:19:54.034266  193878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-918475
	I1018 18:19:54.034360  193878 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 18:19:54.034421  193878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-918475
	I1018 18:19:54.056390  193878 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/old-k8s-version-918475/id_rsa Username:docker}
	I1018 18:19:54.066219  193878 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/old-k8s-version-918475/id_rsa Username:docker}
	I1018 18:19:54.156759  193878 ssh_runner.go:195] Run: systemctl --version
	I1018 18:19:54.248077  193878 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 18:19:54.287388  193878 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 18:19:54.291592  193878 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 18:19:54.291690  193878 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 18:19:54.299350  193878 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 18:19:54.299371  193878 start.go:495] detecting cgroup driver to use...
	I1018 18:19:54.299402  193878 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 18:19:54.299448  193878 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 18:19:54.320516  193878 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 18:19:54.333345  193878 docker.go:218] disabling cri-docker service (if available) ...
	I1018 18:19:54.333415  193878 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 18:19:54.349725  193878 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 18:19:54.362999  193878 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 18:19:54.475585  193878 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 18:19:54.594260  193878 docker.go:234] disabling docker service ...
	I1018 18:19:54.594338  193878 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 18:19:54.610726  193878 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 18:19:54.624822  193878 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 18:19:54.734025  193878 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 18:19:54.846123  193878 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 18:19:54.858801  193878 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 18:19:54.874149  193878 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1018 18:19:54.874215  193878 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:19:54.883739  193878 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 18:19:54.883803  193878 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:19:54.893336  193878 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:19:54.902624  193878 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:19:54.912145  193878 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 18:19:54.922144  193878 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:19:54.931114  193878 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:19:54.939540  193878 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:19:54.948390  193878 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 18:19:54.955769  193878 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 18:19:54.965751  193878 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 18:19:55.083437  193878 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 18:19:55.213108  193878 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 18:19:55.213194  193878 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 18:19:55.217031  193878 start.go:563] Will wait 60s for crictl version
	I1018 18:19:55.217093  193878 ssh_runner.go:195] Run: which crictl
	I1018 18:19:55.220525  193878 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 18:19:55.245213  193878 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 18:19:55.245294  193878 ssh_runner.go:195] Run: crio --version
	I1018 18:19:55.277688  193878 ssh_runner.go:195] Run: crio --version
	I1018 18:19:55.308375  193878 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1018 18:19:55.311498  193878 cli_runner.go:164] Run: docker network inspect old-k8s-version-918475 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 18:19:55.328252  193878 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1018 18:19:55.331964  193878 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 18:19:55.342037  193878 kubeadm.go:883] updating cluster {Name:old-k8s-version-918475 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-918475 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 18:19:55.342176  193878 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1018 18:19:55.342238  193878 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 18:19:55.373820  193878 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 18:19:55.373845  193878 crio.go:433] Images already preloaded, skipping extraction
	I1018 18:19:55.373899  193878 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 18:19:55.402294  193878 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 18:19:55.402317  193878 cache_images.go:85] Images are preloaded, skipping loading
	I1018 18:19:55.402324  193878 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.28.0 crio true true} ...
	I1018 18:19:55.402420  193878 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-918475 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-918475 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 18:19:55.402500  193878 ssh_runner.go:195] Run: crio config
	I1018 18:19:55.454891  193878 cni.go:84] Creating CNI manager for ""
	I1018 18:19:55.454974  193878 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 18:19:55.455013  193878 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 18:19:55.455067  193878 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-918475 NodeName:old-k8s-version-918475 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 18:19:55.455258  193878 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-918475"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 18:19:55.455384  193878 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1018 18:19:55.463050  193878 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 18:19:55.463113  193878 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 18:19:55.470173  193878 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1018 18:19:55.482447  193878 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 18:19:55.494861  193878 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1018 18:19:55.507835  193878 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1018 18:19:55.512607  193878 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 18:19:55.522846  193878 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 18:19:55.644233  193878 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 18:19:55.662524  193878 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/old-k8s-version-918475 for IP: 192.168.76.2
	I1018 18:19:55.662546  193878 certs.go:195] generating shared ca certs ...
	I1018 18:19:55.662562  193878 certs.go:227] acquiring lock for ca certs: {Name:mk544ed642fa2832e9f6dd22fa45f3270b7c1ad4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:19:55.662707  193878 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key
	I1018 18:19:55.662758  193878 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key
	I1018 18:19:55.662769  193878 certs.go:257] generating profile certs ...
	I1018 18:19:55.662847  193878 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/old-k8s-version-918475/client.key
	I1018 18:19:55.662917  193878 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/old-k8s-version-918475/apiserver.key.630d08a5
	I1018 18:19:55.662958  193878 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/old-k8s-version-918475/proxy-client.key
	I1018 18:19:55.663067  193878 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem (1338 bytes)
	W1018 18:19:55.663095  193878 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320_empty.pem, impossibly tiny 0 bytes
	I1018 18:19:55.663110  193878 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 18:19:55.663140  193878 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem (1078 bytes)
	I1018 18:19:55.663165  193878 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem (1123 bytes)
	I1018 18:19:55.663189  193878 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem (1675 bytes)
	I1018 18:19:55.663240  193878 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem (1708 bytes)
	I1018 18:19:55.663825  193878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 18:19:55.697908  193878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 18:19:55.719034  193878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 18:19:55.742892  193878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 18:19:55.763382  193878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/old-k8s-version-918475/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1018 18:19:55.782703  193878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/old-k8s-version-918475/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 18:19:55.802998  193878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/old-k8s-version-918475/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 18:19:55.842878  193878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/old-k8s-version-918475/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 18:19:55.871374  193878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 18:19:55.891907  193878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem --> /usr/share/ca-certificates/4320.pem (1338 bytes)
	I1018 18:19:55.917013  193878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /usr/share/ca-certificates/43202.pem (1708 bytes)
	I1018 18:19:55.937902  193878 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 18:19:55.952126  193878 ssh_runner.go:195] Run: openssl version
	I1018 18:19:55.958684  193878 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4320.pem && ln -fs /usr/share/ca-certificates/4320.pem /etc/ssl/certs/4320.pem"
	I1018 18:19:55.967593  193878 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4320.pem
	I1018 18:19:55.971503  193878 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 17:19 /usr/share/ca-certificates/4320.pem
	I1018 18:19:55.971606  193878 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4320.pem
	I1018 18:19:56.022485  193878 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4320.pem /etc/ssl/certs/51391683.0"
	I1018 18:19:56.030397  193878 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43202.pem && ln -fs /usr/share/ca-certificates/43202.pem /etc/ssl/certs/43202.pem"
	I1018 18:19:56.038577  193878 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43202.pem
	I1018 18:19:56.042588  193878 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 17:19 /usr/share/ca-certificates/43202.pem
	I1018 18:19:56.042702  193878 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43202.pem
	I1018 18:19:56.083842  193878 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43202.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 18:19:56.091889  193878 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 18:19:56.099951  193878 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 18:19:56.103711  193878 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I1018 18:19:56.103776  193878 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 18:19:56.147337  193878 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 18:19:56.157492  193878 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 18:19:56.161475  193878 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 18:19:56.202766  193878 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 18:19:56.245388  193878 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 18:19:56.298040  193878 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 18:19:56.382699  193878 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 18:19:56.453670  193878 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1018 18:19:56.533698  193878 kubeadm.go:400] StartCluster: {Name:old-k8s-version-918475 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-918475 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 18:19:56.533796  193878 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 18:19:56.533885  193878 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 18:19:56.583189  193878 cri.go:89] found id: "ae1eebdab3cf71a07bd4eae6a705ba7ff86c020ba58671cfcc9759010c46c239"
	I1018 18:19:56.583226  193878 cri.go:89] found id: "c25897e752da3dc12951f02ee89f3ec475fb055cb90f27cb1ccc4cde1fc8c6de"
	I1018 18:19:56.583231  193878 cri.go:89] found id: "4c59d57fbe375bf1c3be0746a92ca4d85fdf6a06d96f6437faeb7e9324c89a9b"
	I1018 18:19:56.583236  193878 cri.go:89] found id: "ccff0d24759b80bcd65a9894c590246579f8cca877d5c90bf983408a0b729bb9"
	I1018 18:19:56.583242  193878 cri.go:89] found id: ""
	I1018 18:19:56.583304  193878 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 18:19:56.600536  193878 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T18:19:56Z" level=error msg="open /run/runc: no such file or directory"
	I1018 18:19:56.600652  193878 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 18:19:56.613597  193878 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 18:19:56.613619  193878 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 18:19:56.613681  193878 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 18:19:56.626893  193878 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 18:19:56.627560  193878 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-918475" does not appear in /home/jenkins/minikube-integration/21409-2509/kubeconfig
	I1018 18:19:56.627846  193878 kubeconfig.go:62] /home/jenkins/minikube-integration/21409-2509/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-918475" cluster setting kubeconfig missing "old-k8s-version-918475" context setting]
	I1018 18:19:56.629157  193878 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/kubeconfig: {Name:mk229d7d0b6575771899e0ce8a346bbddf5cf86b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:19:56.630799  193878 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 18:19:56.641565  193878 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1018 18:19:56.641605  193878 kubeadm.go:601] duration metric: took 27.980451ms to restartPrimaryControlPlane
	I1018 18:19:56.641617  193878 kubeadm.go:402] duration metric: took 107.927964ms to StartCluster
	I1018 18:19:56.641646  193878 settings.go:142] acquiring lock: {Name:mk3a3fd093bc95e20cc1842611fedcbe4a79e692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:19:56.641718  193878 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-2509/kubeconfig
	I1018 18:19:56.642843  193878 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/kubeconfig: {Name:mk229d7d0b6575771899e0ce8a346bbddf5cf86b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:19:56.643120  193878 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 18:19:56.643478  193878 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 18:19:56.643574  193878 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-918475"
	I1018 18:19:56.643590  193878 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-918475"
	W1018 18:19:56.643596  193878 addons.go:247] addon storage-provisioner should already be in state true
	I1018 18:19:56.643620  193878 host.go:66] Checking if "old-k8s-version-918475" exists ...
	I1018 18:19:56.643620  193878 addons.go:69] Setting dashboard=true in profile "old-k8s-version-918475"
	I1018 18:19:56.643635  193878 addons.go:238] Setting addon dashboard=true in "old-k8s-version-918475"
	W1018 18:19:56.643641  193878 addons.go:247] addon dashboard should already be in state true
	I1018 18:19:56.643661  193878 host.go:66] Checking if "old-k8s-version-918475" exists ...
	I1018 18:19:56.644086  193878 cli_runner.go:164] Run: docker container inspect old-k8s-version-918475 --format={{.State.Status}}
	I1018 18:19:56.644167  193878 cli_runner.go:164] Run: docker container inspect old-k8s-version-918475 --format={{.State.Status}}
	I1018 18:19:56.644486  193878 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-918475"
	I1018 18:19:56.644510  193878 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-918475"
	I1018 18:19:56.644801  193878 cli_runner.go:164] Run: docker container inspect old-k8s-version-918475 --format={{.State.Status}}
	I1018 18:19:56.643552  193878 config.go:182] Loaded profile config "old-k8s-version-918475": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1018 18:19:56.653006  193878 out.go:179] * Verifying Kubernetes components...
	I1018 18:19:56.656210  193878 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 18:19:56.708843  193878 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 18:19:56.711915  193878 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 18:19:56.711977  193878 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 18:19:56.712158  193878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-918475
	I1018 18:19:56.713144  193878 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-918475"
	W1018 18:19:56.713174  193878 addons.go:247] addon default-storageclass should already be in state true
	I1018 18:19:56.713203  193878 host.go:66] Checking if "old-k8s-version-918475" exists ...
	I1018 18:19:56.713677  193878 cli_runner.go:164] Run: docker container inspect old-k8s-version-918475 --format={{.State.Status}}
	I1018 18:19:56.719231  193878 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1018 18:19:56.722141  193878 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1018 18:19:56.729073  193878 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1018 18:19:56.729101  193878 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1018 18:19:56.729177  193878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-918475
	I1018 18:19:56.768196  193878 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/old-k8s-version-918475/id_rsa Username:docker}
	I1018 18:19:56.775064  193878 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 18:19:56.775083  193878 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 18:19:56.775142  193878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-918475
	I1018 18:19:56.807070  193878 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/old-k8s-version-918475/id_rsa Username:docker}
	I1018 18:19:56.815428  193878 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/old-k8s-version-918475/id_rsa Username:docker}
	I1018 18:19:57.003625  193878 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 18:19:57.027990  193878 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-918475" to be "Ready" ...
	I1018 18:19:57.041749  193878 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 18:19:57.133289  193878 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 18:19:57.158380  193878 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1018 18:19:57.158459  193878 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1018 18:19:57.227507  193878 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1018 18:19:57.227577  193878 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1018 18:19:57.285024  193878 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1018 18:19:57.285100  193878 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1018 18:19:57.341298  193878 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1018 18:19:57.341369  193878 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1018 18:19:57.376542  193878 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1018 18:19:57.376617  193878 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1018 18:19:57.404906  193878 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1018 18:19:57.405009  193878 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1018 18:19:57.427804  193878 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1018 18:19:57.427891  193878 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1018 18:19:57.449447  193878 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1018 18:19:57.449526  193878 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1018 18:19:57.469999  193878 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 18:19:57.470078  193878 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1018 18:19:57.488719  193878 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 18:20:01.555469  193878 node_ready.go:49] node "old-k8s-version-918475" is "Ready"
	I1018 18:20:01.555500  193878 node_ready.go:38] duration metric: took 4.527427734s for node "old-k8s-version-918475" to be "Ready" ...
	I1018 18:20:01.555517  193878 api_server.go:52] waiting for apiserver process to appear ...
	I1018 18:20:01.555581  193878 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 18:20:03.104884  193878 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.971513171s)
	I1018 18:20:03.105885  193878 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.064054666s)
	I1018 18:20:03.658847  193878 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.170041963s)
	I1018 18:20:03.658994  193878 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.103403295s)
	I1018 18:20:03.659015  193878 api_server.go:72] duration metric: took 7.015857251s to wait for apiserver process to appear ...
	I1018 18:20:03.659022  193878 api_server.go:88] waiting for apiserver healthz status ...
	I1018 18:20:03.659044  193878 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 18:20:03.661815  193878 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-918475 addons enable metrics-server
	
	I1018 18:20:03.664763  193878 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1018 18:20:03.667766  193878 addons.go:514] duration metric: took 7.024282434s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1018 18:20:03.673222  193878 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1018 18:20:03.674672  193878 api_server.go:141] control plane version: v1.28.0
	I1018 18:20:03.674697  193878 api_server.go:131] duration metric: took 15.665061ms to wait for apiserver health ...
	I1018 18:20:03.674706  193878 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 18:20:03.683184  193878 system_pods.go:59] 8 kube-system pods found
	I1018 18:20:03.683224  193878 system_pods.go:61] "coredns-5dd5756b68-kd9bz" [db934def-c206-49f5-93c1-5e9e72029aea] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 18:20:03.683233  193878 system_pods.go:61] "etcd-old-k8s-version-918475" [52e60769-ce25-4039-9816-8eee5939547b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 18:20:03.683239  193878 system_pods.go:61] "kindnet-l8wgz" [1ce8f8fe-9578-4405-b71b-8dbb34c91ff8] Running
	I1018 18:20:03.683246  193878 system_pods.go:61] "kube-apiserver-old-k8s-version-918475" [bb13f0ff-7082-4594-b7a9-082fae97e8b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 18:20:03.683254  193878 system_pods.go:61] "kube-controller-manager-old-k8s-version-918475" [11c22b96-b426-4049-b453-30869431916f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 18:20:03.683264  193878 system_pods.go:61] "kube-proxy-776dm" [8dc0388f-47c7-46e9-9f05-4815ce812559] Running
	I1018 18:20:03.683272  193878 system_pods.go:61] "kube-scheduler-old-k8s-version-918475" [b2f9fdec-0d90-4575-a638-f9ed0457ae29] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 18:20:03.683280  193878 system_pods.go:61] "storage-provisioner" [486aafde-9949-4760-8b48-d58682b50726] Running
	I1018 18:20:03.683286  193878 system_pods.go:74] duration metric: took 8.574526ms to wait for pod list to return data ...
	I1018 18:20:03.683294  193878 default_sa.go:34] waiting for default service account to be created ...
	I1018 18:20:03.688704  193878 default_sa.go:45] found service account: "default"
	I1018 18:20:03.688733  193878 default_sa.go:55] duration metric: took 5.430871ms for default service account to be created ...
	I1018 18:20:03.688750  193878 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 18:20:03.694442  193878 system_pods.go:86] 8 kube-system pods found
	I1018 18:20:03.694474  193878 system_pods.go:89] "coredns-5dd5756b68-kd9bz" [db934def-c206-49f5-93c1-5e9e72029aea] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 18:20:03.694485  193878 system_pods.go:89] "etcd-old-k8s-version-918475" [52e60769-ce25-4039-9816-8eee5939547b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 18:20:03.694491  193878 system_pods.go:89] "kindnet-l8wgz" [1ce8f8fe-9578-4405-b71b-8dbb34c91ff8] Running
	I1018 18:20:03.694499  193878 system_pods.go:89] "kube-apiserver-old-k8s-version-918475" [bb13f0ff-7082-4594-b7a9-082fae97e8b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 18:20:03.694505  193878 system_pods.go:89] "kube-controller-manager-old-k8s-version-918475" [11c22b96-b426-4049-b453-30869431916f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 18:20:03.694511  193878 system_pods.go:89] "kube-proxy-776dm" [8dc0388f-47c7-46e9-9f05-4815ce812559] Running
	I1018 18:20:03.694517  193878 system_pods.go:89] "kube-scheduler-old-k8s-version-918475" [b2f9fdec-0d90-4575-a638-f9ed0457ae29] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 18:20:03.694521  193878 system_pods.go:89] "storage-provisioner" [486aafde-9949-4760-8b48-d58682b50726] Running
	I1018 18:20:03.694528  193878 system_pods.go:126] duration metric: took 5.773079ms to wait for k8s-apps to be running ...
	I1018 18:20:03.694536  193878 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 18:20:03.694590  193878 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 18:20:03.711834  193878 system_svc.go:56] duration metric: took 17.289952ms WaitForService to wait for kubelet
	I1018 18:20:03.711908  193878 kubeadm.go:586] duration metric: took 7.068747468s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 18:20:03.711942  193878 node_conditions.go:102] verifying NodePressure condition ...
	I1018 18:20:03.729105  193878 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 18:20:03.729179  193878 node_conditions.go:123] node cpu capacity is 2
	I1018 18:20:03.729206  193878 node_conditions.go:105] duration metric: took 17.247055ms to run NodePressure ...
	I1018 18:20:03.729230  193878 start.go:241] waiting for startup goroutines ...
	I1018 18:20:03.729264  193878 start.go:246] waiting for cluster config update ...
	I1018 18:20:03.729293  193878 start.go:255] writing updated cluster config ...
	I1018 18:20:03.729613  193878 ssh_runner.go:195] Run: rm -f paused
	I1018 18:20:03.733633  193878 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 18:20:03.743608  193878 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-kd9bz" in "kube-system" namespace to be "Ready" or be gone ...
	W1018 18:20:05.750222  193878 pod_ready.go:104] pod "coredns-5dd5756b68-kd9bz" is not "Ready", error: <nil>
	W1018 18:20:08.250213  193878 pod_ready.go:104] pod "coredns-5dd5756b68-kd9bz" is not "Ready", error: <nil>
	W1018 18:20:10.750092  193878 pod_ready.go:104] pod "coredns-5dd5756b68-kd9bz" is not "Ready", error: <nil>
	W1018 18:20:13.251980  193878 pod_ready.go:104] pod "coredns-5dd5756b68-kd9bz" is not "Ready", error: <nil>
	W1018 18:20:15.751520  193878 pod_ready.go:104] pod "coredns-5dd5756b68-kd9bz" is not "Ready", error: <nil>
	W1018 18:20:18.250317  193878 pod_ready.go:104] pod "coredns-5dd5756b68-kd9bz" is not "Ready", error: <nil>
	W1018 18:20:20.250362  193878 pod_ready.go:104] pod "coredns-5dd5756b68-kd9bz" is not "Ready", error: <nil>
	W1018 18:20:22.251690  193878 pod_ready.go:104] pod "coredns-5dd5756b68-kd9bz" is not "Ready", error: <nil>
	W1018 18:20:24.751819  193878 pod_ready.go:104] pod "coredns-5dd5756b68-kd9bz" is not "Ready", error: <nil>
	W1018 18:20:27.250286  193878 pod_ready.go:104] pod "coredns-5dd5756b68-kd9bz" is not "Ready", error: <nil>
	W1018 18:20:29.250605  193878 pod_ready.go:104] pod "coredns-5dd5756b68-kd9bz" is not "Ready", error: <nil>
	W1018 18:20:31.750601  193878 pod_ready.go:104] pod "coredns-5dd5756b68-kd9bz" is not "Ready", error: <nil>
	W1018 18:20:34.251642  193878 pod_ready.go:104] pod "coredns-5dd5756b68-kd9bz" is not "Ready", error: <nil>
	W1018 18:20:36.749912  193878 pod_ready.go:104] pod "coredns-5dd5756b68-kd9bz" is not "Ready", error: <nil>
	I1018 18:20:38.754922  193878 pod_ready.go:94] pod "coredns-5dd5756b68-kd9bz" is "Ready"
	I1018 18:20:38.754950  193878 pod_ready.go:86] duration metric: took 35.011277058s for pod "coredns-5dd5756b68-kd9bz" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:20:38.757906  193878 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-918475" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:20:38.763879  193878 pod_ready.go:94] pod "etcd-old-k8s-version-918475" is "Ready"
	I1018 18:20:38.763902  193878 pod_ready.go:86] duration metric: took 5.967887ms for pod "etcd-old-k8s-version-918475" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:20:38.766988  193878 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-918475" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:20:38.771279  193878 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-918475" is "Ready"
	I1018 18:20:38.771306  193878 pod_ready.go:86] duration metric: took 4.292297ms for pod "kube-apiserver-old-k8s-version-918475" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:20:38.774366  193878 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-918475" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:20:38.950029  193878 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-918475" is "Ready"
	I1018 18:20:38.950056  193878 pod_ready.go:86] duration metric: took 175.66748ms for pod "kube-controller-manager-old-k8s-version-918475" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:20:39.148765  193878 pod_ready.go:83] waiting for pod "kube-proxy-776dm" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:20:39.548126  193878 pod_ready.go:94] pod "kube-proxy-776dm" is "Ready"
	I1018 18:20:39.548153  193878 pod_ready.go:86] duration metric: took 399.361937ms for pod "kube-proxy-776dm" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:20:39.749071  193878 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-918475" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:20:40.148863  193878 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-918475" is "Ready"
	I1018 18:20:40.148965  193878 pod_ready.go:86] duration metric: took 399.86491ms for pod "kube-scheduler-old-k8s-version-918475" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:20:40.149009  193878 pod_ready.go:40] duration metric: took 36.415287754s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 18:20:40.205067  193878 start.go:624] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1018 18:20:40.208267  193878 out.go:203] 
	W1018 18:20:40.211175  193878 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1018 18:20:40.214032  193878 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1018 18:20:40.217005  193878 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-918475" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 18 18:20:36 old-k8s-version-918475 crio[649]: time="2025-10-18T18:20:36.808812303Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 18:20:36 old-k8s-version-918475 crio[649]: time="2025-10-18T18:20:36.821436062Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 18:20:36 old-k8s-version-918475 crio[649]: time="2025-10-18T18:20:36.822016049Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 18:20:36 old-k8s-version-918475 crio[649]: time="2025-10-18T18:20:36.840869352Z" level=info msg="Created container 3392b5258cdf8bfa7e27174e4bb9b951b93b4fad5ce1d4d1606afad1f4c6d3da: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-rmgzd/dashboard-metrics-scraper" id=a67e3514-57d1-4c7a-95a0-6b856139ab70 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 18:20:36 old-k8s-version-918475 crio[649]: time="2025-10-18T18:20:36.843705794Z" level=info msg="Starting container: 3392b5258cdf8bfa7e27174e4bb9b951b93b4fad5ce1d4d1606afad1f4c6d3da" id=eff3aee9-c10d-4e98-bed6-463b22df75a2 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 18:20:36 old-k8s-version-918475 crio[649]: time="2025-10-18T18:20:36.845731584Z" level=info msg="Started container" PID=1647 containerID=3392b5258cdf8bfa7e27174e4bb9b951b93b4fad5ce1d4d1606afad1f4c6d3da description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-rmgzd/dashboard-metrics-scraper id=eff3aee9-c10d-4e98-bed6-463b22df75a2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1aaaff73ac079cfbf412f43d942a472f345bb79dd1b0f6895cb2acebb44ff4d4
	Oct 18 18:20:36 old-k8s-version-918475 conmon[1645]: conmon 3392b5258cdf8bfa7e27 <ninfo>: container 1647 exited with status 1
	Oct 18 18:20:37 old-k8s-version-918475 crio[649]: time="2025-10-18T18:20:37.082909937Z" level=info msg="Removing container: 5fb89de9a0c8095a171816441d5e52f60c24997b7dcf1e941e6b0ce02c938c11" id=76eb1692-dbe6-4e7a-89ac-de1ecb9c534d name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 18:20:37 old-k8s-version-918475 crio[649]: time="2025-10-18T18:20:37.095005928Z" level=info msg="Error loading conmon cgroup of container 5fb89de9a0c8095a171816441d5e52f60c24997b7dcf1e941e6b0ce02c938c11: cgroup deleted" id=76eb1692-dbe6-4e7a-89ac-de1ecb9c534d name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 18:20:37 old-k8s-version-918475 crio[649]: time="2025-10-18T18:20:37.099526904Z" level=info msg="Removed container 5fb89de9a0c8095a171816441d5e52f60c24997b7dcf1e941e6b0ce02c938c11: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-rmgzd/dashboard-metrics-scraper" id=76eb1692-dbe6-4e7a-89ac-de1ecb9c534d name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 18:20:42 old-k8s-version-918475 crio[649]: time="2025-10-18T18:20:42.91157552Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 18:20:42 old-k8s-version-918475 crio[649]: time="2025-10-18T18:20:42.917938596Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 18:20:42 old-k8s-version-918475 crio[649]: time="2025-10-18T18:20:42.91798489Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 18:20:42 old-k8s-version-918475 crio[649]: time="2025-10-18T18:20:42.918010818Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 18:20:42 old-k8s-version-918475 crio[649]: time="2025-10-18T18:20:42.921408597Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 18:20:42 old-k8s-version-918475 crio[649]: time="2025-10-18T18:20:42.921449303Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 18:20:42 old-k8s-version-918475 crio[649]: time="2025-10-18T18:20:42.921473443Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 18:20:42 old-k8s-version-918475 crio[649]: time="2025-10-18T18:20:42.924590587Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 18:20:42 old-k8s-version-918475 crio[649]: time="2025-10-18T18:20:42.924628864Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 18:20:42 old-k8s-version-918475 crio[649]: time="2025-10-18T18:20:42.924653529Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 18:20:42 old-k8s-version-918475 crio[649]: time="2025-10-18T18:20:42.927899732Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 18:20:42 old-k8s-version-918475 crio[649]: time="2025-10-18T18:20:42.927936746Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 18:20:42 old-k8s-version-918475 crio[649]: time="2025-10-18T18:20:42.927954995Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 18:20:42 old-k8s-version-918475 crio[649]: time="2025-10-18T18:20:42.931087662Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 18:20:42 old-k8s-version-918475 crio[649]: time="2025-10-18T18:20:42.931122215Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	3392b5258cdf8       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           18 seconds ago      Exited              dashboard-metrics-scraper   2                   1aaaff73ac079       dashboard-metrics-scraper-5f989dc9cf-rmgzd       kubernetes-dashboard
	cd3f22f6532cd       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           22 seconds ago      Running             storage-provisioner         2                   38e748c788982       storage-provisioner                              kube-system
	d89d2a46547ff       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   31 seconds ago      Running             kubernetes-dashboard        0                   4d66519dabe7d       kubernetes-dashboard-8694d4445c-4dr8k            kubernetes-dashboard
	37f6234e2a329       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           52 seconds ago      Running             coredns                     1                   ea34ba138c5ce       coredns-5dd5756b68-kd9bz                         kube-system
	343e3671d8d97       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           52 seconds ago      Running             busybox                     1                   7e3e15c36d5f6       busybox                                          default
	6b83bfedf7c37       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           52 seconds ago      Running             kube-proxy                  1                   93ffc315e4694       kube-proxy-776dm                                 kube-system
	235f17d855192       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           52 seconds ago      Exited              storage-provisioner         1                   38e748c788982       storage-provisioner                              kube-system
	788c1c65cd78f       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           52 seconds ago      Running             kindnet-cni                 1                   df2368ce940f6       kindnet-l8wgz                                    kube-system
	ae1eebdab3cf7       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           58 seconds ago      Running             kube-scheduler              1                   ccac66ee79706       kube-scheduler-old-k8s-version-918475            kube-system
	c25897e752da3       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           58 seconds ago      Running             etcd                        1                   b1381bfc7def3       etcd-old-k8s-version-918475                      kube-system
	4c59d57fbe375       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           58 seconds ago      Running             kube-controller-manager     1                   d37a45450967d       kube-controller-manager-old-k8s-version-918475   kube-system
	ccff0d24759b8       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           58 seconds ago      Running             kube-apiserver              1                   bfa6e8f6b0cd7       kube-apiserver-old-k8s-version-918475            kube-system
	
	
	==> coredns [37f6234e2a329ee90bd5cd471d64c651488562dd21c5b2e64d113322e20e47fe] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:49160 - 29310 "HINFO IN 7959786435207860516.7452248654979187562. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022475749s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-918475
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-918475
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=old-k8s-version-918475
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T18_18_55_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 18:18:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-918475
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 18:20:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 18:20:32 +0000   Sat, 18 Oct 2025 18:18:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 18:20:32 +0000   Sat, 18 Oct 2025 18:18:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 18:20:32 +0000   Sat, 18 Oct 2025 18:18:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 18:20:32 +0000   Sat, 18 Oct 2025 18:19:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-918475
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                f524e506-54d4-439d-bba8-8edfc5d97a5b
	  Boot ID:                    8a1d6305-2994-4aa4-a4cc-6b62966b9918
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-5dd5756b68-kd9bz                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     108s
	  kube-system                 etcd-old-k8s-version-918475                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m1s
	  kube-system                 kindnet-l8wgz                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      108s
	  kube-system                 kube-apiserver-old-k8s-version-918475             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-controller-manager-old-k8s-version-918475    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-proxy-776dm                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-old-k8s-version-918475             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-rmgzd        0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-4dr8k             0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 107s                 kube-proxy       
	  Normal  Starting                 52s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m9s (x8 over 2m9s)  kubelet          Node old-k8s-version-918475 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m9s (x8 over 2m9s)  kubelet          Node old-k8s-version-918475 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m9s (x8 over 2m9s)  kubelet          Node old-k8s-version-918475 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     2m1s                 kubelet          Node old-k8s-version-918475 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m1s                 kubelet          Node old-k8s-version-918475 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m1s                 kubelet          Node old-k8s-version-918475 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m1s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           109s                 node-controller  Node old-k8s-version-918475 event: Registered Node old-k8s-version-918475 in Controller
	  Normal  NodeReady                95s                  kubelet          Node old-k8s-version-918475 status is now: NodeReady
	  Normal  Starting                 60s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  60s (x8 over 60s)    kubelet          Node old-k8s-version-918475 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s (x8 over 60s)    kubelet          Node old-k8s-version-918475 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s (x8 over 60s)    kubelet          Node old-k8s-version-918475 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           41s                  node-controller  Node old-k8s-version-918475 event: Registered Node old-k8s-version-918475 in Controller
	
	
	==> dmesg <==
	[Oct18 17:53] overlayfs: idmapped layers are currently not supported
	[Oct18 17:58] overlayfs: idmapped layers are currently not supported
	[ +33.320958] overlayfs: idmapped layers are currently not supported
	[Oct18 18:00] overlayfs: idmapped layers are currently not supported
	[Oct18 18:01] overlayfs: idmapped layers are currently not supported
	[Oct18 18:02] overlayfs: idmapped layers are currently not supported
	[Oct18 18:04] overlayfs: idmapped layers are currently not supported
	[ +24.403909] overlayfs: idmapped layers are currently not supported
	[  +6.162774] overlayfs: idmapped layers are currently not supported
	[Oct18 18:05] overlayfs: idmapped layers are currently not supported
	[ +25.128760] overlayfs: idmapped layers are currently not supported
	[Oct18 18:06] overlayfs: idmapped layers are currently not supported
	[Oct18 18:07] overlayfs: idmapped layers are currently not supported
	[Oct18 18:08] overlayfs: idmapped layers are currently not supported
	[Oct18 18:09] overlayfs: idmapped layers are currently not supported
	[Oct18 18:11] overlayfs: idmapped layers are currently not supported
	[Oct18 18:13] overlayfs: idmapped layers are currently not supported
	[ +30.969240] overlayfs: idmapped layers are currently not supported
	[Oct18 18:15] overlayfs: idmapped layers are currently not supported
	[Oct18 18:16] overlayfs: idmapped layers are currently not supported
	[Oct18 18:17] overlayfs: idmapped layers are currently not supported
	[ +23.167826] overlayfs: idmapped layers are currently not supported
	[Oct18 18:18] overlayfs: idmapped layers are currently not supported
	[ +38.509809] overlayfs: idmapped layers are currently not supported
	[Oct18 18:19] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [c25897e752da3dc12951f02ee89f3ec475fb055cb90f27cb1ccc4cde1fc8c6de] <==
	{"level":"info","ts":"2025-10-18T18:19:56.916293Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-18T18:19:56.916317Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-18T18:19:56.916491Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-10-18T18:19:56.916539Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-10-18T18:19:56.916615Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-18T18:19:56.916652Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-18T18:19:56.916764Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-18T18:19:56.91681Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-18T18:19:56.91682Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-18T18:19:56.921257Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-18T18:19:56.921285Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-18T18:19:57.976818Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-18T18:19:57.976968Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-18T18:19:57.977037Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-10-18T18:19:57.977076Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-10-18T18:19:57.977106Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-10-18T18:19:57.97714Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-10-18T18:19:57.97717Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-10-18T18:19:57.981282Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-18T18:19:57.982345Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-10-18T18:19:57.982731Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-18T18:19:57.983578Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-18T18:19:57.98125Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-918475 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-18T18:19:58.014609Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-18T18:19:58.014705Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 18:20:55 up  2:03,  0 user,  load average: 2.25, 3.03, 2.71
	Linux old-k8s-version-918475 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [788c1c65cd78f7ed26e15732fc3c949da8652d1331bb7a89fd1b2fa40c67386f] <==
	I1018 18:20:02.717572       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 18:20:02.717813       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1018 18:20:02.717945       1 main.go:148] setting mtu 1500 for CNI 
	I1018 18:20:02.717957       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 18:20:02.717967       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T18:20:02Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 18:20:02.908824       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 18:20:02.908842       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 18:20:02.908851       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 18:20:02.909187       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1018 18:20:32.909488       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1018 18:20:32.909495       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1018 18:20:32.909635       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1018 18:20:32.909612       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1018 18:20:34.409293       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 18:20:34.409327       1 metrics.go:72] Registering metrics
	I1018 18:20:34.409404       1 controller.go:711] "Syncing nftables rules"
	I1018 18:20:42.911240       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 18:20:42.911280       1 main.go:301] handling current node
	I1018 18:20:52.914576       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 18:20:52.914611       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ccff0d24759b80bcd65a9894c590246579f8cca877d5c90bf983408a0b729bb9] <==
	I1018 18:20:01.197833       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1018 18:20:01.491860       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1018 18:20:01.582170       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1018 18:20:01.587159       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 18:20:01.598352       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1018 18:20:01.598386       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1018 18:20:01.598372       1 shared_informer.go:318] Caches are synced for configmaps
	I1018 18:20:01.598518       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1018 18:20:01.598622       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1018 18:20:01.599507       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1018 18:20:01.599646       1 aggregator.go:166] initial CRD sync complete...
	I1018 18:20:01.599686       1 autoregister_controller.go:141] Starting autoregister controller
	I1018 18:20:01.599715       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 18:20:01.599741       1 cache.go:39] Caches are synced for autoregister controller
	I1018 18:20:02.200789       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 18:20:03.483494       1 controller.go:624] quota admission added evaluator for: namespaces
	I1018 18:20:03.528994       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1018 18:20:03.557112       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 18:20:03.567920       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 18:20:03.580660       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1018 18:20:03.634475       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.179.20"}
	I1018 18:20:03.651763       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.104.172"}
	I1018 18:20:14.863060       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1018 18:20:14.930013       1 controller.go:624] quota admission added evaluator for: endpoints
	I1018 18:20:14.980645       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [4c59d57fbe375bf1c3be0746a92ca4d85fdf6a06d96f6437faeb7e9324c89a9b] <==
	I1018 18:20:14.878922       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1018 18:20:14.893389       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-4dr8k"
	I1018 18:20:14.894734       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-rmgzd"
	I1018 18:20:14.922641       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="44.262767ms"
	I1018 18:20:14.924370       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="54.605316ms"
	I1018 18:20:14.947560       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="24.778244ms"
	I1018 18:20:14.947907       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="157.401µs"
	I1018 18:20:14.975443       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="51.033586ms"
	I1018 18:20:14.992019       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="132.712µs"
	I1018 18:20:14.997375       1 endpointslice_controller.go:310] "Error syncing endpoint slices for service, retrying" key="kubernetes-dashboard/kubernetes-dashboard" err="EndpointSlice informer cache is out of date"
	I1018 18:20:14.997851       1 endpointslice_controller.go:310] "Error syncing endpoint slices for service, retrying" key="kubernetes-dashboard/dashboard-metrics-scraper" err="EndpointSlice informer cache is out of date"
	I1018 18:20:15.004347       1 shared_informer.go:318] Caches are synced for garbage collector
	I1018 18:20:15.004621       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="29.046935ms"
	I1018 18:20:15.004751       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="86.524µs"
	I1018 18:20:15.040599       1 shared_informer.go:318] Caches are synced for garbage collector
	I1018 18:20:15.040696       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1018 18:20:20.037190       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="301.575µs"
	I1018 18:20:21.042594       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="50.471µs"
	I1018 18:20:22.056729       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="101.055µs"
	I1018 18:20:25.076407       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="12.698516ms"
	I1018 18:20:25.077877       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="48.871µs"
	I1018 18:20:37.101004       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="63.352µs"
	I1018 18:20:38.296721       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="16.328668ms"
	I1018 18:20:38.297009       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="107.768µs"
	I1018 18:20:45.276299       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="64.624µs"
	
	
	==> kube-proxy [6b83bfedf7c37fc9bf7f3d03db7cee37209be54754656efb059c09c8f2eb2ceb] <==
	I1018 18:20:02.822903       1 server_others.go:69] "Using iptables proxy"
	I1018 18:20:02.845714       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1018 18:20:02.903775       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 18:20:02.911518       1 server_others.go:152] "Using iptables Proxier"
	I1018 18:20:02.911555       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1018 18:20:02.911564       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1018 18:20:02.911595       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1018 18:20:02.911796       1 server.go:846] "Version info" version="v1.28.0"
	I1018 18:20:02.911806       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 18:20:02.913064       1 config.go:188] "Starting service config controller"
	I1018 18:20:02.913076       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1018 18:20:02.913097       1 config.go:97] "Starting endpoint slice config controller"
	I1018 18:20:02.913101       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1018 18:20:02.921034       1 config.go:315] "Starting node config controller"
	I1018 18:20:02.921056       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1018 18:20:03.024869       1 shared_informer.go:318] Caches are synced for node config
	I1018 18:20:03.024915       1 shared_informer.go:318] Caches are synced for service config
	I1018 18:20:03.025009       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [ae1eebdab3cf71a07bd4eae6a705ba7ff86c020ba58671cfcc9759010c46c239] <==
	W1018 18:20:01.490060       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1018 18:20:01.490085       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1018 18:20:01.490181       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1018 18:20:01.490199       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1018 18:20:01.490236       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1018 18:20:01.490297       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1018 18:20:01.490253       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1018 18:20:01.490379       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1018 18:20:01.490360       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1018 18:20:01.490454       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1018 18:20:01.490431       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1018 18:20:01.490515       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1018 18:20:01.490546       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1018 18:20:01.490525       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1018 18:20:01.490589       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1018 18:20:01.490620       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1018 18:20:01.490664       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1018 18:20:01.490680       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1018 18:20:01.490747       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1018 18:20:01.490791       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1018 18:20:01.490755       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1018 18:20:01.490861       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1018 18:20:01.490825       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1018 18:20:01.490920       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I1018 18:20:03.142143       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 18 18:20:15 old-k8s-version-918475 kubelet[776]: I1018 18:20:15.068979     776 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmgfr\" (UniqueName: \"kubernetes.io/projected/7a344bb7-dbef-407e-a17b-95ee3212304e-kube-api-access-xmgfr\") pod \"kubernetes-dashboard-8694d4445c-4dr8k\" (UID: \"7a344bb7-dbef-407e-a17b-95ee3212304e\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-4dr8k"
	Oct 18 18:20:15 old-k8s-version-918475 kubelet[776]: I1018 18:20:15.069159     776 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/bacb32f2-b820-4279-a362-120e4c43e038-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-rmgzd\" (UID: \"bacb32f2-b820-4279-a362-120e4c43e038\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-rmgzd"
	Oct 18 18:20:15 old-k8s-version-918475 kubelet[776]: I1018 18:20:15.069221     776 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2vtn\" (UniqueName: \"kubernetes.io/projected/bacb32f2-b820-4279-a362-120e4c43e038-kube-api-access-v2vtn\") pod \"dashboard-metrics-scraper-5f989dc9cf-rmgzd\" (UID: \"bacb32f2-b820-4279-a362-120e4c43e038\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-rmgzd"
	Oct 18 18:20:15 old-k8s-version-918475 kubelet[776]: W1018 18:20:15.255157     776 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/13ab62783a421f101660de74d2bec3818ff41a6620bfd3ec135d6adb2e8c1df6/crio-1aaaff73ac079cfbf412f43d942a472f345bb79dd1b0f6895cb2acebb44ff4d4 WatchSource:0}: Error finding container 1aaaff73ac079cfbf412f43d942a472f345bb79dd1b0f6895cb2acebb44ff4d4: Status 404 returned error can't find the container with id 1aaaff73ac079cfbf412f43d942a472f345bb79dd1b0f6895cb2acebb44ff4d4
	Oct 18 18:20:15 old-k8s-version-918475 kubelet[776]: W1018 18:20:15.260153     776 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/13ab62783a421f101660de74d2bec3818ff41a6620bfd3ec135d6adb2e8c1df6/crio-4d66519dabe7d4e152ca7dec224dd41115e4b2b93a1c9d52e43736a63b94135a WatchSource:0}: Error finding container 4d66519dabe7d4e152ca7dec224dd41115e4b2b93a1c9d52e43736a63b94135a: Status 404 returned error can't find the container with id 4d66519dabe7d4e152ca7dec224dd41115e4b2b93a1c9d52e43736a63b94135a
	Oct 18 18:20:20 old-k8s-version-918475 kubelet[776]: I1018 18:20:20.017760     776 scope.go:117] "RemoveContainer" containerID="79957774176cb644567f0a3c2ddaf95eace2bc647525f5d2d1babc61bd7ec58d"
	Oct 18 18:20:21 old-k8s-version-918475 kubelet[776]: I1018 18:20:21.023004     776 scope.go:117] "RemoveContainer" containerID="79957774176cb644567f0a3c2ddaf95eace2bc647525f5d2d1babc61bd7ec58d"
	Oct 18 18:20:21 old-k8s-version-918475 kubelet[776]: I1018 18:20:21.023293     776 scope.go:117] "RemoveContainer" containerID="5fb89de9a0c8095a171816441d5e52f60c24997b7dcf1e941e6b0ce02c938c11"
	Oct 18 18:20:21 old-k8s-version-918475 kubelet[776]: E1018 18:20:21.026524     776 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-rmgzd_kubernetes-dashboard(bacb32f2-b820-4279-a362-120e4c43e038)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-rmgzd" podUID="bacb32f2-b820-4279-a362-120e4c43e038"
	Oct 18 18:20:22 old-k8s-version-918475 kubelet[776]: I1018 18:20:22.031719     776 scope.go:117] "RemoveContainer" containerID="5fb89de9a0c8095a171816441d5e52f60c24997b7dcf1e941e6b0ce02c938c11"
	Oct 18 18:20:22 old-k8s-version-918475 kubelet[776]: E1018 18:20:22.032117     776 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-rmgzd_kubernetes-dashboard(bacb32f2-b820-4279-a362-120e4c43e038)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-rmgzd" podUID="bacb32f2-b820-4279-a362-120e4c43e038"
	Oct 18 18:20:25 old-k8s-version-918475 kubelet[776]: I1018 18:20:25.220915     776 scope.go:117] "RemoveContainer" containerID="5fb89de9a0c8095a171816441d5e52f60c24997b7dcf1e941e6b0ce02c938c11"
	Oct 18 18:20:25 old-k8s-version-918475 kubelet[776]: E1018 18:20:25.221284     776 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-rmgzd_kubernetes-dashboard(bacb32f2-b820-4279-a362-120e4c43e038)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-rmgzd" podUID="bacb32f2-b820-4279-a362-120e4c43e038"
	Oct 18 18:20:33 old-k8s-version-918475 kubelet[776]: I1018 18:20:33.067562     776 scope.go:117] "RemoveContainer" containerID="235f17d855192310ccb1a489b3d0c7f7ebbad52420790a930e349f341c3e8d8f"
	Oct 18 18:20:33 old-k8s-version-918475 kubelet[776]: I1018 18:20:33.091608     776 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-4dr8k" podStartSLOduration=10.325332492 podCreationTimestamp="2025-10-18 18:20:14 +0000 UTC" firstStartedPulling="2025-10-18 18:20:15.266001372 +0000 UTC m=+19.602675859" lastFinishedPulling="2025-10-18 18:20:24.032214434 +0000 UTC m=+28.368888920" observedRunningTime="2025-10-18 18:20:25.062536464 +0000 UTC m=+29.399210959" watchObservedRunningTime="2025-10-18 18:20:33.091545553 +0000 UTC m=+37.428220048"
	Oct 18 18:20:36 old-k8s-version-918475 kubelet[776]: I1018 18:20:36.805621     776 scope.go:117] "RemoveContainer" containerID="5fb89de9a0c8095a171816441d5e52f60c24997b7dcf1e941e6b0ce02c938c11"
	Oct 18 18:20:37 old-k8s-version-918475 kubelet[776]: I1018 18:20:37.080553     776 scope.go:117] "RemoveContainer" containerID="5fb89de9a0c8095a171816441d5e52f60c24997b7dcf1e941e6b0ce02c938c11"
	Oct 18 18:20:37 old-k8s-version-918475 kubelet[776]: I1018 18:20:37.080855     776 scope.go:117] "RemoveContainer" containerID="3392b5258cdf8bfa7e27174e4bb9b951b93b4fad5ce1d4d1606afad1f4c6d3da"
	Oct 18 18:20:37 old-k8s-version-918475 kubelet[776]: E1018 18:20:37.081178     776 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-rmgzd_kubernetes-dashboard(bacb32f2-b820-4279-a362-120e4c43e038)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-rmgzd" podUID="bacb32f2-b820-4279-a362-120e4c43e038"
	Oct 18 18:20:45 old-k8s-version-918475 kubelet[776]: I1018 18:20:45.237713     776 scope.go:117] "RemoveContainer" containerID="3392b5258cdf8bfa7e27174e4bb9b951b93b4fad5ce1d4d1606afad1f4c6d3da"
	Oct 18 18:20:45 old-k8s-version-918475 kubelet[776]: E1018 18:20:45.238661     776 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-rmgzd_kubernetes-dashboard(bacb32f2-b820-4279-a362-120e4c43e038)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-rmgzd" podUID="bacb32f2-b820-4279-a362-120e4c43e038"
	Oct 18 18:20:52 old-k8s-version-918475 kubelet[776]: I1018 18:20:52.540118     776 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 18 18:20:52 old-k8s-version-918475 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 18:20:52 old-k8s-version-918475 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 18:20:52 old-k8s-version-918475 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [d89d2a46547ff61a18edd8b04922f7e9a116c1917c95fdc49f17527d20b9e15e] <==
	2025/10/18 18:20:24 Using namespace: kubernetes-dashboard
	2025/10/18 18:20:24 Using in-cluster config to connect to apiserver
	2025/10/18 18:20:24 Using secret token for csrf signing
	2025/10/18 18:20:24 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/18 18:20:24 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/18 18:20:24 Successful initial request to the apiserver, version: v1.28.0
	2025/10/18 18:20:24 Generating JWE encryption key
	2025/10/18 18:20:24 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/18 18:20:24 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/18 18:20:24 Initializing JWE encryption key from synchronized object
	2025/10/18 18:20:24 Creating in-cluster Sidecar client
	2025/10/18 18:20:24 Serving insecurely on HTTP port: 9090
	2025/10/18 18:20:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 18:20:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 18:20:24 Starting overwatch
	
	
	==> storage-provisioner [235f17d855192310ccb1a489b3d0c7f7ebbad52420790a930e349f341c3e8d8f] <==
	I1018 18:20:02.671974       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1018 18:20:32.674698       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [cd3f22f6532cd26d3f39cbbc7521c9d8a7a712934dd47081a9ad7584afc64c38] <==
	I1018 18:20:33.116868       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 18:20:33.130581       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 18:20:33.130627       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1018 18:20:50.530883       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 18:20:50.531460       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"50769fb3-6713-46ea-856e-a4e705d84615", APIVersion:"v1", ResourceVersion:"664", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-918475_1e70752d-57bb-422f-bb0a-538fa8bcc735 became leader
	I1018 18:20:50.531725       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-918475_1e70752d-57bb-422f-bb0a-538fa8bcc735!
	I1018 18:20:50.632650       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-918475_1e70752d-57bb-422f-bb0a-538fa8bcc735!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-918475 -n old-k8s-version-918475
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-918475 -n old-k8s-version-918475: exit status 2 (362.980545ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-918475 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-918475
helpers_test.go:243: (dbg) docker inspect old-k8s-version-918475:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "13ab62783a421f101660de74d2bec3818ff41a6620bfd3ec135d6adb2e8c1df6",
	        "Created": "2025-10-18T18:18:25.775142041Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 194009,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T18:19:48.922861232Z",
	            "FinishedAt": "2025-10-18T18:19:48.122280911Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/13ab62783a421f101660de74d2bec3818ff41a6620bfd3ec135d6adb2e8c1df6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/13ab62783a421f101660de74d2bec3818ff41a6620bfd3ec135d6adb2e8c1df6/hostname",
	        "HostsPath": "/var/lib/docker/containers/13ab62783a421f101660de74d2bec3818ff41a6620bfd3ec135d6adb2e8c1df6/hosts",
	        "LogPath": "/var/lib/docker/containers/13ab62783a421f101660de74d2bec3818ff41a6620bfd3ec135d6adb2e8c1df6/13ab62783a421f101660de74d2bec3818ff41a6620bfd3ec135d6adb2e8c1df6-json.log",
	        "Name": "/old-k8s-version-918475",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-918475:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-918475",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "13ab62783a421f101660de74d2bec3818ff41a6620bfd3ec135d6adb2e8c1df6",
	                "LowerDir": "/var/lib/docker/overlay2/3cbaaca74a96e66e2281894a8ded9a8b4932ecc5b1eaa08dd2c608cf2a8fb5aa-init/diff:/var/lib/docker/overlay2/584ab177b02ad2db5330471b7171ad39934c457d8615b9ee4939a04b59f78474/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3cbaaca74a96e66e2281894a8ded9a8b4932ecc5b1eaa08dd2c608cf2a8fb5aa/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3cbaaca74a96e66e2281894a8ded9a8b4932ecc5b1eaa08dd2c608cf2a8fb5aa/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3cbaaca74a96e66e2281894a8ded9a8b4932ecc5b1eaa08dd2c608cf2a8fb5aa/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-918475",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-918475/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-918475",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-918475",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-918475",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c0b0a14b29f981955e7560c774ae2a0df30edf2afdaad0443d92b82ad128c683",
	            "SandboxKey": "/var/run/docker/netns/c0b0a14b29f9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33048"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33049"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33052"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33050"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33051"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-918475": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "22:97:f6:bc:a2:3c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2f21c2763fceb3911d220e045f4c363e42b3b9b9b29b62d56c07c23b82cc830b",
	                    "EndpointID": "572525a6cf4544c4163ac9f23cbe971eb1a12786d5a30a6aea660308907d2e4c",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-918475",
	                        "13ab62783a42"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-918475 -n old-k8s-version-918475
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-918475 -n old-k8s-version-918475: exit status 2 (361.820973ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-918475 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-918475 logs -n 25: (1.283993404s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-111074 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-111074            │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │                     │
	│ ssh     │ -p cilium-111074 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-111074            │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │                     │
	│ ssh     │ -p cilium-111074 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-111074            │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │                     │
	│ ssh     │ -p cilium-111074 sudo containerd config dump                                                                                                                                                                                                  │ cilium-111074            │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │                     │
	│ ssh     │ -p cilium-111074 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-111074            │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │                     │
	│ ssh     │ -p cilium-111074 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-111074            │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │                     │
	│ ssh     │ -p cilium-111074 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-111074            │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │                     │
	│ ssh     │ -p cilium-111074 sudo crio config                                                                                                                                                                                                             │ cilium-111074            │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │                     │
	│ delete  │ -p cilium-111074                                                                                                                                                                                                                              │ cilium-111074            │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │ 18 Oct 25 18:16 UTC │
	│ start   │ -p force-systemd-env-785999 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-785999 │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │ 18 Oct 25 18:17 UTC │
	│ pause   │ -p pause-321903 --alsologtostderr -v=5                                                                                                                                                                                                        │ pause-321903             │ jenkins │ v1.37.0 │ 18 Oct 25 18:17 UTC │                     │
	│ delete  │ -p pause-321903                                                                                                                                                                                                                               │ pause-321903             │ jenkins │ v1.37.0 │ 18 Oct 25 18:17 UTC │ 18 Oct 25 18:17 UTC │
	│ start   │ -p cert-expiration-463770 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-463770   │ jenkins │ v1.37.0 │ 18 Oct 25 18:17 UTC │ 18 Oct 25 18:18 UTC │
	│ delete  │ -p force-systemd-env-785999                                                                                                                                                                                                                   │ force-systemd-env-785999 │ jenkins │ v1.37.0 │ 18 Oct 25 18:17 UTC │ 18 Oct 25 18:17 UTC │
	│ start   │ -p cert-options-327418 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-327418      │ jenkins │ v1.37.0 │ 18 Oct 25 18:17 UTC │ 18 Oct 25 18:18 UTC │
	│ ssh     │ cert-options-327418 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-327418      │ jenkins │ v1.37.0 │ 18 Oct 25 18:18 UTC │ 18 Oct 25 18:18 UTC │
	│ ssh     │ -p cert-options-327418 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-327418      │ jenkins │ v1.37.0 │ 18 Oct 25 18:18 UTC │ 18 Oct 25 18:18 UTC │
	│ delete  │ -p cert-options-327418                                                                                                                                                                                                                        │ cert-options-327418      │ jenkins │ v1.37.0 │ 18 Oct 25 18:18 UTC │ 18 Oct 25 18:18 UTC │
	│ start   │ -p old-k8s-version-918475 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-918475   │ jenkins │ v1.37.0 │ 18 Oct 25 18:18 UTC │ 18 Oct 25 18:19 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-918475 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-918475   │ jenkins │ v1.37.0 │ 18 Oct 25 18:19 UTC │                     │
	│ stop    │ -p old-k8s-version-918475 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-918475   │ jenkins │ v1.37.0 │ 18 Oct 25 18:19 UTC │ 18 Oct 25 18:19 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-918475 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-918475   │ jenkins │ v1.37.0 │ 18 Oct 25 18:19 UTC │ 18 Oct 25 18:19 UTC │
	│ start   │ -p old-k8s-version-918475 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-918475   │ jenkins │ v1.37.0 │ 18 Oct 25 18:19 UTC │ 18 Oct 25 18:20 UTC │
	│ image   │ old-k8s-version-918475 image list --format=json                                                                                                                                                                                               │ old-k8s-version-918475   │ jenkins │ v1.37.0 │ 18 Oct 25 18:20 UTC │ 18 Oct 25 18:20 UTC │
	│ pause   │ -p old-k8s-version-918475 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-918475   │ jenkins │ v1.37.0 │ 18 Oct 25 18:20 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 18:19:48
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 18:19:48.652019  193878 out.go:360] Setting OutFile to fd 1 ...
	I1018 18:19:48.652149  193878 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 18:19:48.652159  193878 out.go:374] Setting ErrFile to fd 2...
	I1018 18:19:48.652163  193878 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 18:19:48.652417  193878 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-2509/.minikube/bin
	I1018 18:19:48.652853  193878 out.go:368] Setting JSON to false
	I1018 18:19:48.653906  193878 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7338,"bootTime":1760804251,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1018 18:19:48.653979  193878 start.go:141] virtualization:  
	I1018 18:19:48.656966  193878 out.go:179] * [old-k8s-version-918475] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 18:19:48.660884  193878 out.go:179]   - MINIKUBE_LOCATION=21409
	I1018 18:19:48.660915  193878 notify.go:220] Checking for updates...
	I1018 18:19:48.663842  193878 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 18:19:48.666993  193878 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-2509/kubeconfig
	I1018 18:19:48.669960  193878 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-2509/.minikube
	I1018 18:19:48.673081  193878 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 18:19:48.675953  193878 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 18:19:48.679417  193878 config.go:182] Loaded profile config "old-k8s-version-918475": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1018 18:19:48.682926  193878 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1018 18:19:48.685814  193878 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 18:19:48.723526  193878 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 18:19:48.723647  193878 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 18:19:48.777653  193878 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 18:19:48.767934557 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 18:19:48.777755  193878 docker.go:318] overlay module found
	I1018 18:19:48.781004  193878 out.go:179] * Using the docker driver based on existing profile
	I1018 18:19:48.783877  193878 start.go:305] selected driver: docker
	I1018 18:19:48.783893  193878 start.go:925] validating driver "docker" against &{Name:old-k8s-version-918475 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-918475 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 18:19:48.783982  193878 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 18:19:48.784731  193878 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 18:19:48.836456  193878 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 18:19:48.823372055 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 18:19:48.836799  193878 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 18:19:48.836827  193878 cni.go:84] Creating CNI manager for ""
	I1018 18:19:48.836884  193878 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 18:19:48.836927  193878 start.go:349] cluster config:
	{Name:old-k8s-version-918475 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-918475 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 18:19:48.840367  193878 out.go:179] * Starting "old-k8s-version-918475" primary control-plane node in "old-k8s-version-918475" cluster
	I1018 18:19:48.843205  193878 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 18:19:48.846551  193878 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 18:19:48.849573  193878 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1018 18:19:48.849633  193878 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1018 18:19:48.849648  193878 cache.go:58] Caching tarball of preloaded images
	I1018 18:19:48.849660  193878 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 18:19:48.849730  193878 preload.go:233] Found /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 18:19:48.849740  193878 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1018 18:19:48.849843  193878 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/old-k8s-version-918475/config.json ...
	I1018 18:19:48.868734  193878 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 18:19:48.868757  193878 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 18:19:48.868776  193878 cache.go:232] Successfully downloaded all kic artifacts
	I1018 18:19:48.868800  193878 start.go:360] acquireMachinesLock for old-k8s-version-918475: {Name:mke4efc3cc1fc03dd6efc3fd3e060d8181392707 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 18:19:48.868868  193878 start.go:364] duration metric: took 45.137µs to acquireMachinesLock for "old-k8s-version-918475"
	I1018 18:19:48.868891  193878 start.go:96] Skipping create...Using existing machine configuration
	I1018 18:19:48.868903  193878 fix.go:54] fixHost starting: 
	I1018 18:19:48.869199  193878 cli_runner.go:164] Run: docker container inspect old-k8s-version-918475 --format={{.State.Status}}
	I1018 18:19:48.886120  193878 fix.go:112] recreateIfNeeded on old-k8s-version-918475: state=Stopped err=<nil>
	W1018 18:19:48.886149  193878 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 18:19:48.889385  193878 out.go:252] * Restarting existing docker container for "old-k8s-version-918475" ...
	I1018 18:19:48.889464  193878 cli_runner.go:164] Run: docker start old-k8s-version-918475
	I1018 18:19:49.151152  193878 cli_runner.go:164] Run: docker container inspect old-k8s-version-918475 --format={{.State.Status}}
	I1018 18:19:49.175402  193878 kic.go:430] container "old-k8s-version-918475" state is running.
	I1018 18:19:49.176743  193878 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-918475
	I1018 18:19:49.198777  193878 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/old-k8s-version-918475/config.json ...
	I1018 18:19:49.199000  193878 machine.go:93] provisionDockerMachine start ...
	I1018 18:19:49.199056  193878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-918475
	I1018 18:19:49.221834  193878 main.go:141] libmachine: Using SSH client type: native
	I1018 18:19:49.222160  193878 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33048 <nil> <nil>}
	I1018 18:19:49.222174  193878 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 18:19:49.222785  193878 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1018 18:19:52.376652  193878 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-918475
	
	I1018 18:19:52.376677  193878 ubuntu.go:182] provisioning hostname "old-k8s-version-918475"
	I1018 18:19:52.376750  193878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-918475
	I1018 18:19:52.394298  193878 main.go:141] libmachine: Using SSH client type: native
	I1018 18:19:52.394660  193878 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33048 <nil> <nil>}
	I1018 18:19:52.394679  193878 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-918475 && echo "old-k8s-version-918475" | sudo tee /etc/hostname
	I1018 18:19:52.551050  193878 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-918475
	
	I1018 18:19:52.551168  193878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-918475
	I1018 18:19:52.568908  193878 main.go:141] libmachine: Using SSH client type: native
	I1018 18:19:52.569271  193878 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33048 <nil> <nil>}
	I1018 18:19:52.569297  193878 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-918475' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-918475/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-918475' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 18:19:52.728974  193878 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 18:19:52.728999  193878 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-2509/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-2509/.minikube}
	I1018 18:19:52.729029  193878 ubuntu.go:190] setting up certificates
	I1018 18:19:52.729040  193878 provision.go:84] configureAuth start
	I1018 18:19:52.729109  193878 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-918475
	I1018 18:19:52.747396  193878 provision.go:143] copyHostCerts
	I1018 18:19:52.747468  193878 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem, removing ...
	I1018 18:19:52.747486  193878 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem
	I1018 18:19:52.747564  193878 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem (1675 bytes)
	I1018 18:19:52.747676  193878 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem, removing ...
	I1018 18:19:52.747687  193878 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem
	I1018 18:19:52.747720  193878 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem (1078 bytes)
	I1018 18:19:52.747789  193878 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem, removing ...
	I1018 18:19:52.747798  193878 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem
	I1018 18:19:52.747823  193878 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem (1123 bytes)
	I1018 18:19:52.747883  193878 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-918475 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-918475]
	I1018 18:19:53.218890  193878 provision.go:177] copyRemoteCerts
	I1018 18:19:53.218956  193878 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 18:19:53.218995  193878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-918475
	I1018 18:19:53.237300  193878 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/old-k8s-version-918475/id_rsa Username:docker}
	I1018 18:19:53.345447  193878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 18:19:53.363923  193878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1018 18:19:53.380909  193878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 18:19:53.398749  193878 provision.go:87] duration metric: took 669.682007ms to configureAuth
	I1018 18:19:53.398777  193878 ubuntu.go:206] setting minikube options for container-runtime
	I1018 18:19:53.398963  193878 config.go:182] Loaded profile config "old-k8s-version-918475": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1018 18:19:53.399076  193878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-918475
	I1018 18:19:53.416010  193878 main.go:141] libmachine: Using SSH client type: native
	I1018 18:19:53.416313  193878 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33048 <nil> <nil>}
	I1018 18:19:53.416333  193878 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 18:19:53.727261  193878 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 18:19:53.727327  193878 machine.go:96] duration metric: took 4.528317466s to provisionDockerMachine
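Note: the mkdir/printf/tee step just above (18:19:53) drops a sysconfig file so CRI-O restarts with the in-cluster service CIDR (10.96.0.0/12) treated as an insecure registry. A minimal way to confirm the drop-in landed, sketched with the container name taken from this log (illustrative only, not part of the test run):

	docker exec old-k8s-version-918475 cat /etc/sysconfig/crio.minikube
	# expected content:
	# CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '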
	I1018 18:19:53.727350  193878 start.go:293] postStartSetup for "old-k8s-version-918475" (driver="docker")
	I1018 18:19:53.727375  193878 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 18:19:53.727486  193878 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 18:19:53.727568  193878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-918475
	I1018 18:19:53.754868  193878 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/old-k8s-version-918475/id_rsa Username:docker}
	I1018 18:19:53.861028  193878 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 18:19:53.864441  193878 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 18:19:53.864469  193878 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 18:19:53.864481  193878 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/addons for local assets ...
	I1018 18:19:53.864543  193878 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/files for local assets ...
	I1018 18:19:53.864630  193878 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> 43202.pem in /etc/ssl/certs
	I1018 18:19:53.864752  193878 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 18:19:53.872357  193878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /etc/ssl/certs/43202.pem (1708 bytes)
	I1018 18:19:53.890538  193878 start.go:296] duration metric: took 163.159117ms for postStartSetup
	I1018 18:19:53.890633  193878 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 18:19:53.890678  193878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-918475
	I1018 18:19:53.910117  193878 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/old-k8s-version-918475/id_rsa Username:docker}
	I1018 18:19:54.011279  193878 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 18:19:54.016503  193878 fix.go:56] duration metric: took 5.147592693s for fixHost
	I1018 18:19:54.016530  193878 start.go:83] releasing machines lock for "old-k8s-version-918475", held for 5.147650138s
	I1018 18:19:54.016596  193878 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-918475
	I1018 18:19:54.034209  193878 ssh_runner.go:195] Run: cat /version.json
	I1018 18:19:54.034266  193878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-918475
	I1018 18:19:54.034360  193878 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 18:19:54.034421  193878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-918475
	I1018 18:19:54.056390  193878 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/old-k8s-version-918475/id_rsa Username:docker}
	I1018 18:19:54.066219  193878 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/old-k8s-version-918475/id_rsa Username:docker}
	I1018 18:19:54.156759  193878 ssh_runner.go:195] Run: systemctl --version
	I1018 18:19:54.248077  193878 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 18:19:54.287388  193878 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 18:19:54.291592  193878 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 18:19:54.291690  193878 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 18:19:54.299350  193878 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 18:19:54.299371  193878 start.go:495] detecting cgroup driver to use...
	I1018 18:19:54.299402  193878 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 18:19:54.299448  193878 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 18:19:54.320516  193878 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 18:19:54.333345  193878 docker.go:218] disabling cri-docker service (if available) ...
	I1018 18:19:54.333415  193878 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 18:19:54.349725  193878 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 18:19:54.362999  193878 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 18:19:54.475585  193878 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 18:19:54.594260  193878 docker.go:234] disabling docker service ...
	I1018 18:19:54.594338  193878 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 18:19:54.610726  193878 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 18:19:54.624822  193878 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 18:19:54.734025  193878 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 18:19:54.846123  193878 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 18:19:54.858801  193878 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 18:19:54.874149  193878 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1018 18:19:54.874215  193878 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:19:54.883739  193878 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 18:19:54.883803  193878 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:19:54.893336  193878 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:19:54.902624  193878 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:19:54.912145  193878 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 18:19:54.922144  193878 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:19:54.931114  193878 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:19:54.939540  193878 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:19:54.948390  193878 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 18:19:54.955769  193878 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 18:19:54.965751  193878 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 18:19:55.083437  193878 ssh_runner.go:195] Run: sudo systemctl restart crio
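Note: the sed sequence at 18:19:54 edits /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image, switches the cgroup manager to cgroupfs, forces conmon into the pod cgroup, and opens unprivileged low ports, before the daemon-reload and crio restart above. A sketch of the relevant drop-in entries after those edits, reconstructed from the commands (the actual file is never dumped in this log; section names follow the stock CRI-O config layout):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]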
	I1018 18:19:55.213108  193878 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 18:19:55.213194  193878 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 18:19:55.217031  193878 start.go:563] Will wait 60s for crictl version
	I1018 18:19:55.217093  193878 ssh_runner.go:195] Run: which crictl
	I1018 18:19:55.220525  193878 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 18:19:55.245213  193878 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 18:19:55.245294  193878 ssh_runner.go:195] Run: crio --version
	I1018 18:19:55.277688  193878 ssh_runner.go:195] Run: crio --version
	I1018 18:19:55.308375  193878 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1018 18:19:55.311498  193878 cli_runner.go:164] Run: docker network inspect old-k8s-version-918475 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 18:19:55.328252  193878 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1018 18:19:55.331964  193878 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
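Note: the hosts rewrite above uses a filter-then-copy pattern: strip any existing host.minikube.internal line, append the current gateway IP, write the result to a temp file, and install it with sudo cp. The copy step is needed because a plain `sudo ... > /etc/hosts` would redirect in the unprivileged shell, not as root. The same pattern reduced to a generic sketch, with a hypothetical hostname and a documentation-range IP standing in for the real entry:

	{ grep -v $'\texample.internal$' /etc/hosts; echo $'192.0.2.1\texample.internal'; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts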
	I1018 18:19:55.342037  193878 kubeadm.go:883] updating cluster {Name:old-k8s-version-918475 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-918475 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountU
ID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 18:19:55.342176  193878 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1018 18:19:55.342238  193878 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 18:19:55.373820  193878 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 18:19:55.373845  193878 crio.go:433] Images already preloaded, skipping extraction
	I1018 18:19:55.373899  193878 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 18:19:55.402294  193878 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 18:19:55.402317  193878 cache_images.go:85] Images are preloaded, skipping loading
	I1018 18:19:55.402324  193878 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.28.0 crio true true} ...
	I1018 18:19:55.402420  193878 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-918475 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-918475 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
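Note: the unit fragments above are rendered into the kubelet systemd files installed by the scp calls at 18:19:55 below: /lib/systemd/system/kubelet.service plus the /etc/systemd/system/kubelet.service.d/10-kubeadm.conf drop-in that carries the ExecStart override. To see what the kubelet unit resolves to on the node, something along these lines would work (illustrative; container name from this log):

	docker exec old-k8s-version-918475 systemctl cat kubelet
	# prints kubelet.service followed by the 10-kubeadm.conf drop-in shown above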
	I1018 18:19:55.402500  193878 ssh_runner.go:195] Run: crio config
	I1018 18:19:55.454891  193878 cni.go:84] Creating CNI manager for ""
	I1018 18:19:55.454974  193878 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 18:19:55.455013  193878 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 18:19:55.455067  193878 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-918475 NodeName:old-k8s-version-918475 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 18:19:55.455258  193878 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-918475"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
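Note: this generated kubeadm config is what the 2160-byte scp below writes to /var/tmp/minikube/kubeadm.yaml.new; later in this run a diff against the existing /var/tmp/minikube/kubeadm.yaml concludes that no reconfiguration is needed. For reference only (not something this test performs), a config of this shape can be exercised without touching a cluster via kubeadm's dry-run mode:

	kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run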
	
	I1018 18:19:55.455384  193878 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1018 18:19:55.463050  193878 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 18:19:55.463113  193878 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 18:19:55.470173  193878 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1018 18:19:55.482447  193878 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 18:19:55.494861  193878 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1018 18:19:55.507835  193878 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1018 18:19:55.512607  193878 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 18:19:55.522846  193878 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 18:19:55.644233  193878 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 18:19:55.662524  193878 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/old-k8s-version-918475 for IP: 192.168.76.2
	I1018 18:19:55.662546  193878 certs.go:195] generating shared ca certs ...
	I1018 18:19:55.662562  193878 certs.go:227] acquiring lock for ca certs: {Name:mk544ed642fa2832e9f6dd22fa45f3270b7c1ad4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:19:55.662707  193878 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key
	I1018 18:19:55.662758  193878 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key
	I1018 18:19:55.662769  193878 certs.go:257] generating profile certs ...
	I1018 18:19:55.662847  193878 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/old-k8s-version-918475/client.key
	I1018 18:19:55.662917  193878 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/old-k8s-version-918475/apiserver.key.630d08a5
	I1018 18:19:55.662958  193878 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/old-k8s-version-918475/proxy-client.key
	I1018 18:19:55.663067  193878 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem (1338 bytes)
	W1018 18:19:55.663095  193878 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320_empty.pem, impossibly tiny 0 bytes
	I1018 18:19:55.663110  193878 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 18:19:55.663140  193878 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem (1078 bytes)
	I1018 18:19:55.663165  193878 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem (1123 bytes)
	I1018 18:19:55.663189  193878 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem (1675 bytes)
	I1018 18:19:55.663240  193878 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem (1708 bytes)
	I1018 18:19:55.663825  193878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 18:19:55.697908  193878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 18:19:55.719034  193878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 18:19:55.742892  193878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 18:19:55.763382  193878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/old-k8s-version-918475/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1018 18:19:55.782703  193878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/old-k8s-version-918475/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 18:19:55.802998  193878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/old-k8s-version-918475/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 18:19:55.842878  193878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/old-k8s-version-918475/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 18:19:55.871374  193878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 18:19:55.891907  193878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem --> /usr/share/ca-certificates/4320.pem (1338 bytes)
	I1018 18:19:55.917013  193878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /usr/share/ca-certificates/43202.pem (1708 bytes)
	I1018 18:19:55.937902  193878 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 18:19:55.952126  193878 ssh_runner.go:195] Run: openssl version
	I1018 18:19:55.958684  193878 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4320.pem && ln -fs /usr/share/ca-certificates/4320.pem /etc/ssl/certs/4320.pem"
	I1018 18:19:55.967593  193878 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4320.pem
	I1018 18:19:55.971503  193878 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 17:19 /usr/share/ca-certificates/4320.pem
	I1018 18:19:55.971606  193878 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4320.pem
	I1018 18:19:56.022485  193878 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4320.pem /etc/ssl/certs/51391683.0"
	I1018 18:19:56.030397  193878 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43202.pem && ln -fs /usr/share/ca-certificates/43202.pem /etc/ssl/certs/43202.pem"
	I1018 18:19:56.038577  193878 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43202.pem
	I1018 18:19:56.042588  193878 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 17:19 /usr/share/ca-certificates/43202.pem
	I1018 18:19:56.042702  193878 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43202.pem
	I1018 18:19:56.083842  193878 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43202.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 18:19:56.091889  193878 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 18:19:56.099951  193878 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 18:19:56.103711  193878 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I1018 18:19:56.103776  193878 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 18:19:56.147337  193878 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
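Note: the ls/openssl/ln triplets above install each certificate into the OpenSSL hashed CA directory: `openssl x509 -hash -noout` prints the subject-name hash, and the cert is then symlinked as /etc/ssl/certs/<hash>.0 (51391683.0, 3ec20f2e.0 and b5213941.0 here) so OpenSSL-based clients can find it during verification. The pattern boils down to this sketch, using the minikube CA from this log:

	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"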
	I1018 18:19:56.157492  193878 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 18:19:56.161475  193878 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 18:19:56.202766  193878 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 18:19:56.245388  193878 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 18:19:56.298040  193878 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 18:19:56.382699  193878 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 18:19:56.453670  193878 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
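Note: the run of `openssl x509 ... -checkend 86400` calls above is a plain expiry check: -checkend N exits 0 only if the certificate is still valid N seconds from now, so each command verifies that the corresponding control-plane cert will not expire within the next 24 hours. For example:

	openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
	  && echo "valid for at least another 24h" \
	  || echo "expired or expiring within 24h"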
	I1018 18:19:56.533698  193878 kubeadm.go:400] StartCluster: {Name:old-k8s-version-918475 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-918475 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:
docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 18:19:56.533796  193878 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 18:19:56.533885  193878 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 18:19:56.583189  193878 cri.go:89] found id: "ae1eebdab3cf71a07bd4eae6a705ba7ff86c020ba58671cfcc9759010c46c239"
	I1018 18:19:56.583226  193878 cri.go:89] found id: "c25897e752da3dc12951f02ee89f3ec475fb055cb90f27cb1ccc4cde1fc8c6de"
	I1018 18:19:56.583231  193878 cri.go:89] found id: "4c59d57fbe375bf1c3be0746a92ca4d85fdf6a06d96f6437faeb7e9324c89a9b"
	I1018 18:19:56.583236  193878 cri.go:89] found id: "ccff0d24759b80bcd65a9894c590246579f8cca877d5c90bf983408a0b729bb9"
	I1018 18:19:56.583242  193878 cri.go:89] found id: ""
	I1018 18:19:56.583304  193878 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 18:19:56.600536  193878 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T18:19:56Z" level=error msg="open /run/runc: no such file or directory"
	I1018 18:19:56.600652  193878 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 18:19:56.613597  193878 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 18:19:56.613619  193878 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 18:19:56.613681  193878 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 18:19:56.626893  193878 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 18:19:56.627560  193878 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-918475" does not appear in /home/jenkins/minikube-integration/21409-2509/kubeconfig
	I1018 18:19:56.627846  193878 kubeconfig.go:62] /home/jenkins/minikube-integration/21409-2509/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-918475" cluster setting kubeconfig missing "old-k8s-version-918475" context setting]
	I1018 18:19:56.629157  193878 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/kubeconfig: {Name:mk229d7d0b6575771899e0ce8a346bbddf5cf86b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:19:56.630799  193878 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 18:19:56.641565  193878 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1018 18:19:56.641605  193878 kubeadm.go:601] duration metric: took 27.980451ms to restartPrimaryControlPlane
	I1018 18:19:56.641617  193878 kubeadm.go:402] duration metric: took 107.927964ms to StartCluster
	I1018 18:19:56.641646  193878 settings.go:142] acquiring lock: {Name:mk3a3fd093bc95e20cc1842611fedcbe4a79e692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:19:56.641718  193878 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-2509/kubeconfig
	I1018 18:19:56.642843  193878 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/kubeconfig: {Name:mk229d7d0b6575771899e0ce8a346bbddf5cf86b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:19:56.643120  193878 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 18:19:56.643478  193878 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 18:19:56.643574  193878 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-918475"
	I1018 18:19:56.643590  193878 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-918475"
	W1018 18:19:56.643596  193878 addons.go:247] addon storage-provisioner should already be in state true
	I1018 18:19:56.643620  193878 host.go:66] Checking if "old-k8s-version-918475" exists ...
	I1018 18:19:56.643620  193878 addons.go:69] Setting dashboard=true in profile "old-k8s-version-918475"
	I1018 18:19:56.643635  193878 addons.go:238] Setting addon dashboard=true in "old-k8s-version-918475"
	W1018 18:19:56.643641  193878 addons.go:247] addon dashboard should already be in state true
	I1018 18:19:56.643661  193878 host.go:66] Checking if "old-k8s-version-918475" exists ...
	I1018 18:19:56.644086  193878 cli_runner.go:164] Run: docker container inspect old-k8s-version-918475 --format={{.State.Status}}
	I1018 18:19:56.644167  193878 cli_runner.go:164] Run: docker container inspect old-k8s-version-918475 --format={{.State.Status}}
	I1018 18:19:56.644486  193878 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-918475"
	I1018 18:19:56.644510  193878 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-918475"
	I1018 18:19:56.644801  193878 cli_runner.go:164] Run: docker container inspect old-k8s-version-918475 --format={{.State.Status}}
	I1018 18:19:56.643552  193878 config.go:182] Loaded profile config "old-k8s-version-918475": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1018 18:19:56.653006  193878 out.go:179] * Verifying Kubernetes components...
	I1018 18:19:56.656210  193878 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 18:19:56.708843  193878 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 18:19:56.711915  193878 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 18:19:56.711977  193878 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 18:19:56.712158  193878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-918475
	I1018 18:19:56.713144  193878 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-918475"
	W1018 18:19:56.713174  193878 addons.go:247] addon default-storageclass should already be in state true
	I1018 18:19:56.713203  193878 host.go:66] Checking if "old-k8s-version-918475" exists ...
	I1018 18:19:56.713677  193878 cli_runner.go:164] Run: docker container inspect old-k8s-version-918475 --format={{.State.Status}}
	I1018 18:19:56.719231  193878 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1018 18:19:56.722141  193878 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1018 18:19:56.729073  193878 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1018 18:19:56.729101  193878 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1018 18:19:56.729177  193878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-918475
	I1018 18:19:56.768196  193878 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/old-k8s-version-918475/id_rsa Username:docker}
	I1018 18:19:56.775064  193878 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 18:19:56.775083  193878 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 18:19:56.775142  193878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-918475
	I1018 18:19:56.807070  193878 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/old-k8s-version-918475/id_rsa Username:docker}
	I1018 18:19:56.815428  193878 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/old-k8s-version-918475/id_rsa Username:docker}
	I1018 18:19:57.003625  193878 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 18:19:57.027990  193878 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-918475" to be "Ready" ...
	I1018 18:19:57.041749  193878 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 18:19:57.133289  193878 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 18:19:57.158380  193878 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1018 18:19:57.158459  193878 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1018 18:19:57.227507  193878 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1018 18:19:57.227577  193878 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1018 18:19:57.285024  193878 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1018 18:19:57.285100  193878 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1018 18:19:57.341298  193878 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1018 18:19:57.341369  193878 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1018 18:19:57.376542  193878 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1018 18:19:57.376617  193878 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1018 18:19:57.404906  193878 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1018 18:19:57.405009  193878 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1018 18:19:57.427804  193878 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1018 18:19:57.427891  193878 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1018 18:19:57.449447  193878 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1018 18:19:57.449526  193878 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1018 18:19:57.469999  193878 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 18:19:57.470078  193878 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1018 18:19:57.488719  193878 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 18:20:01.555469  193878 node_ready.go:49] node "old-k8s-version-918475" is "Ready"
	I1018 18:20:01.555500  193878 node_ready.go:38] duration metric: took 4.527427734s for node "old-k8s-version-918475" to be "Ready" ...
	I1018 18:20:01.555517  193878 api_server.go:52] waiting for apiserver process to appear ...
	I1018 18:20:01.555581  193878 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 18:20:03.104884  193878 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.971513171s)
	I1018 18:20:03.105885  193878 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.064054666s)
	I1018 18:20:03.658847  193878 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.170041963s)
	I1018 18:20:03.658994  193878 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.103403295s)
	I1018 18:20:03.659015  193878 api_server.go:72] duration metric: took 7.015857251s to wait for apiserver process to appear ...
	I1018 18:20:03.659022  193878 api_server.go:88] waiting for apiserver healthz status ...
	I1018 18:20:03.659044  193878 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 18:20:03.661815  193878 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-918475 addons enable metrics-server
	
	I1018 18:20:03.664763  193878 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1018 18:20:03.667766  193878 addons.go:514] duration metric: took 7.024282434s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1018 18:20:03.673222  193878 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1018 18:20:03.674672  193878 api_server.go:141] control plane version: v1.28.0
	I1018 18:20:03.674697  193878 api_server.go:131] duration metric: took 15.665061ms to wait for apiserver health ...
	I1018 18:20:03.674706  193878 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 18:20:03.683184  193878 system_pods.go:59] 8 kube-system pods found
	I1018 18:20:03.683224  193878 system_pods.go:61] "coredns-5dd5756b68-kd9bz" [db934def-c206-49f5-93c1-5e9e72029aea] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 18:20:03.683233  193878 system_pods.go:61] "etcd-old-k8s-version-918475" [52e60769-ce25-4039-9816-8eee5939547b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 18:20:03.683239  193878 system_pods.go:61] "kindnet-l8wgz" [1ce8f8fe-9578-4405-b71b-8dbb34c91ff8] Running
	I1018 18:20:03.683246  193878 system_pods.go:61] "kube-apiserver-old-k8s-version-918475" [bb13f0ff-7082-4594-b7a9-082fae97e8b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 18:20:03.683254  193878 system_pods.go:61] "kube-controller-manager-old-k8s-version-918475" [11c22b96-b426-4049-b453-30869431916f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 18:20:03.683264  193878 system_pods.go:61] "kube-proxy-776dm" [8dc0388f-47c7-46e9-9f05-4815ce812559] Running
	I1018 18:20:03.683272  193878 system_pods.go:61] "kube-scheduler-old-k8s-version-918475" [b2f9fdec-0d90-4575-a638-f9ed0457ae29] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 18:20:03.683280  193878 system_pods.go:61] "storage-provisioner" [486aafde-9949-4760-8b48-d58682b50726] Running
	I1018 18:20:03.683286  193878 system_pods.go:74] duration metric: took 8.574526ms to wait for pod list to return data ...
	I1018 18:20:03.683294  193878 default_sa.go:34] waiting for default service account to be created ...
	I1018 18:20:03.688704  193878 default_sa.go:45] found service account: "default"
	I1018 18:20:03.688733  193878 default_sa.go:55] duration metric: took 5.430871ms for default service account to be created ...
	I1018 18:20:03.688750  193878 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 18:20:03.694442  193878 system_pods.go:86] 8 kube-system pods found
	I1018 18:20:03.694474  193878 system_pods.go:89] "coredns-5dd5756b68-kd9bz" [db934def-c206-49f5-93c1-5e9e72029aea] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 18:20:03.694485  193878 system_pods.go:89] "etcd-old-k8s-version-918475" [52e60769-ce25-4039-9816-8eee5939547b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 18:20:03.694491  193878 system_pods.go:89] "kindnet-l8wgz" [1ce8f8fe-9578-4405-b71b-8dbb34c91ff8] Running
	I1018 18:20:03.694499  193878 system_pods.go:89] "kube-apiserver-old-k8s-version-918475" [bb13f0ff-7082-4594-b7a9-082fae97e8b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 18:20:03.694505  193878 system_pods.go:89] "kube-controller-manager-old-k8s-version-918475" [11c22b96-b426-4049-b453-30869431916f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 18:20:03.694511  193878 system_pods.go:89] "kube-proxy-776dm" [8dc0388f-47c7-46e9-9f05-4815ce812559] Running
	I1018 18:20:03.694517  193878 system_pods.go:89] "kube-scheduler-old-k8s-version-918475" [b2f9fdec-0d90-4575-a638-f9ed0457ae29] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 18:20:03.694521  193878 system_pods.go:89] "storage-provisioner" [486aafde-9949-4760-8b48-d58682b50726] Running
	I1018 18:20:03.694528  193878 system_pods.go:126] duration metric: took 5.773079ms to wait for k8s-apps to be running ...
	I1018 18:20:03.694536  193878 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 18:20:03.694590  193878 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 18:20:03.711834  193878 system_svc.go:56] duration metric: took 17.289952ms WaitForService to wait for kubelet
	I1018 18:20:03.711908  193878 kubeadm.go:586] duration metric: took 7.068747468s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 18:20:03.711942  193878 node_conditions.go:102] verifying NodePressure condition ...
	I1018 18:20:03.729105  193878 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 18:20:03.729179  193878 node_conditions.go:123] node cpu capacity is 2
	I1018 18:20:03.729206  193878 node_conditions.go:105] duration metric: took 17.247055ms to run NodePressure ...
	I1018 18:20:03.729230  193878 start.go:241] waiting for startup goroutines ...
	I1018 18:20:03.729264  193878 start.go:246] waiting for cluster config update ...
	I1018 18:20:03.729293  193878 start.go:255] writing updated cluster config ...
	I1018 18:20:03.729613  193878 ssh_runner.go:195] Run: rm -f paused
	I1018 18:20:03.733633  193878 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 18:20:03.743608  193878 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-kd9bz" in "kube-system" namespace to be "Ready" or be gone ...
	W1018 18:20:05.750222  193878 pod_ready.go:104] pod "coredns-5dd5756b68-kd9bz" is not "Ready", error: <nil>
	W1018 18:20:08.250213  193878 pod_ready.go:104] pod "coredns-5dd5756b68-kd9bz" is not "Ready", error: <nil>
	W1018 18:20:10.750092  193878 pod_ready.go:104] pod "coredns-5dd5756b68-kd9bz" is not "Ready", error: <nil>
	W1018 18:20:13.251980  193878 pod_ready.go:104] pod "coredns-5dd5756b68-kd9bz" is not "Ready", error: <nil>
	W1018 18:20:15.751520  193878 pod_ready.go:104] pod "coredns-5dd5756b68-kd9bz" is not "Ready", error: <nil>
	W1018 18:20:18.250317  193878 pod_ready.go:104] pod "coredns-5dd5756b68-kd9bz" is not "Ready", error: <nil>
	W1018 18:20:20.250362  193878 pod_ready.go:104] pod "coredns-5dd5756b68-kd9bz" is not "Ready", error: <nil>
	W1018 18:20:22.251690  193878 pod_ready.go:104] pod "coredns-5dd5756b68-kd9bz" is not "Ready", error: <nil>
	W1018 18:20:24.751819  193878 pod_ready.go:104] pod "coredns-5dd5756b68-kd9bz" is not "Ready", error: <nil>
	W1018 18:20:27.250286  193878 pod_ready.go:104] pod "coredns-5dd5756b68-kd9bz" is not "Ready", error: <nil>
	W1018 18:20:29.250605  193878 pod_ready.go:104] pod "coredns-5dd5756b68-kd9bz" is not "Ready", error: <nil>
	W1018 18:20:31.750601  193878 pod_ready.go:104] pod "coredns-5dd5756b68-kd9bz" is not "Ready", error: <nil>
	W1018 18:20:34.251642  193878 pod_ready.go:104] pod "coredns-5dd5756b68-kd9bz" is not "Ready", error: <nil>
	W1018 18:20:36.749912  193878 pod_ready.go:104] pod "coredns-5dd5756b68-kd9bz" is not "Ready", error: <nil>
	I1018 18:20:38.754922  193878 pod_ready.go:94] pod "coredns-5dd5756b68-kd9bz" is "Ready"
	I1018 18:20:38.754950  193878 pod_ready.go:86] duration metric: took 35.011277058s for pod "coredns-5dd5756b68-kd9bz" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:20:38.757906  193878 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-918475" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:20:38.763879  193878 pod_ready.go:94] pod "etcd-old-k8s-version-918475" is "Ready"
	I1018 18:20:38.763902  193878 pod_ready.go:86] duration metric: took 5.967887ms for pod "etcd-old-k8s-version-918475" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:20:38.766988  193878 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-918475" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:20:38.771279  193878 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-918475" is "Ready"
	I1018 18:20:38.771306  193878 pod_ready.go:86] duration metric: took 4.292297ms for pod "kube-apiserver-old-k8s-version-918475" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:20:38.774366  193878 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-918475" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:20:38.950029  193878 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-918475" is "Ready"
	I1018 18:20:38.950056  193878 pod_ready.go:86] duration metric: took 175.66748ms for pod "kube-controller-manager-old-k8s-version-918475" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:20:39.148765  193878 pod_ready.go:83] waiting for pod "kube-proxy-776dm" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:20:39.548126  193878 pod_ready.go:94] pod "kube-proxy-776dm" is "Ready"
	I1018 18:20:39.548153  193878 pod_ready.go:86] duration metric: took 399.361937ms for pod "kube-proxy-776dm" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:20:39.749071  193878 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-918475" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:20:40.148863  193878 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-918475" is "Ready"
	I1018 18:20:40.148965  193878 pod_ready.go:86] duration metric: took 399.86491ms for pod "kube-scheduler-old-k8s-version-918475" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:20:40.149009  193878 pod_ready.go:40] duration metric: took 36.415287754s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 18:20:40.205067  193878 start.go:624] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1018 18:20:40.208267  193878 out.go:203] 
	W1018 18:20:40.211175  193878 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1018 18:20:40.214032  193878 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1018 18:20:40.217005  193878 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-918475" cluster and "default" namespace by default
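For context on the "Checking apiserver healthz" and "returned 200: ok" lines in the client log above: minikube polls the apiserver's /healthz endpoint until it answers before declaring the control plane healthy. The following is only an illustrative stdlib Go sketch of that wait pattern, not the minikube implementation; the URL and timeout are taken from the log, and skipping TLS verification stands in for loading the real cluster CA from the kubeconfig.

// healthz_probe.go: illustrative sketch only, not minikube code.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout elapses.
func waitForHealthz(url string, timeout time.Duration) error {
	// The apiserver serves a self-signed cert in this setup; a real client
	// would verify against the cluster CA instead of skipping verification.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.76.2:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}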
	
	
	==> CRI-O <==
	Oct 18 18:20:36 old-k8s-version-918475 crio[649]: time="2025-10-18T18:20:36.808812303Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 18:20:36 old-k8s-version-918475 crio[649]: time="2025-10-18T18:20:36.821436062Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 18:20:36 old-k8s-version-918475 crio[649]: time="2025-10-18T18:20:36.822016049Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 18:20:36 old-k8s-version-918475 crio[649]: time="2025-10-18T18:20:36.840869352Z" level=info msg="Created container 3392b5258cdf8bfa7e27174e4bb9b951b93b4fad5ce1d4d1606afad1f4c6d3da: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-rmgzd/dashboard-metrics-scraper" id=a67e3514-57d1-4c7a-95a0-6b856139ab70 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 18:20:36 old-k8s-version-918475 crio[649]: time="2025-10-18T18:20:36.843705794Z" level=info msg="Starting container: 3392b5258cdf8bfa7e27174e4bb9b951b93b4fad5ce1d4d1606afad1f4c6d3da" id=eff3aee9-c10d-4e98-bed6-463b22df75a2 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 18:20:36 old-k8s-version-918475 crio[649]: time="2025-10-18T18:20:36.845731584Z" level=info msg="Started container" PID=1647 containerID=3392b5258cdf8bfa7e27174e4bb9b951b93b4fad5ce1d4d1606afad1f4c6d3da description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-rmgzd/dashboard-metrics-scraper id=eff3aee9-c10d-4e98-bed6-463b22df75a2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1aaaff73ac079cfbf412f43d942a472f345bb79dd1b0f6895cb2acebb44ff4d4
	Oct 18 18:20:36 old-k8s-version-918475 conmon[1645]: conmon 3392b5258cdf8bfa7e27 <ninfo>: container 1647 exited with status 1
	Oct 18 18:20:37 old-k8s-version-918475 crio[649]: time="2025-10-18T18:20:37.082909937Z" level=info msg="Removing container: 5fb89de9a0c8095a171816441d5e52f60c24997b7dcf1e941e6b0ce02c938c11" id=76eb1692-dbe6-4e7a-89ac-de1ecb9c534d name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 18:20:37 old-k8s-version-918475 crio[649]: time="2025-10-18T18:20:37.095005928Z" level=info msg="Error loading conmon cgroup of container 5fb89de9a0c8095a171816441d5e52f60c24997b7dcf1e941e6b0ce02c938c11: cgroup deleted" id=76eb1692-dbe6-4e7a-89ac-de1ecb9c534d name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 18:20:37 old-k8s-version-918475 crio[649]: time="2025-10-18T18:20:37.099526904Z" level=info msg="Removed container 5fb89de9a0c8095a171816441d5e52f60c24997b7dcf1e941e6b0ce02c938c11: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-rmgzd/dashboard-metrics-scraper" id=76eb1692-dbe6-4e7a-89ac-de1ecb9c534d name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 18:20:42 old-k8s-version-918475 crio[649]: time="2025-10-18T18:20:42.91157552Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 18:20:42 old-k8s-version-918475 crio[649]: time="2025-10-18T18:20:42.917938596Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 18:20:42 old-k8s-version-918475 crio[649]: time="2025-10-18T18:20:42.91798489Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 18:20:42 old-k8s-version-918475 crio[649]: time="2025-10-18T18:20:42.918010818Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 18:20:42 old-k8s-version-918475 crio[649]: time="2025-10-18T18:20:42.921408597Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 18:20:42 old-k8s-version-918475 crio[649]: time="2025-10-18T18:20:42.921449303Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 18:20:42 old-k8s-version-918475 crio[649]: time="2025-10-18T18:20:42.921473443Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 18:20:42 old-k8s-version-918475 crio[649]: time="2025-10-18T18:20:42.924590587Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 18:20:42 old-k8s-version-918475 crio[649]: time="2025-10-18T18:20:42.924628864Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 18:20:42 old-k8s-version-918475 crio[649]: time="2025-10-18T18:20:42.924653529Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 18:20:42 old-k8s-version-918475 crio[649]: time="2025-10-18T18:20:42.927899732Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 18:20:42 old-k8s-version-918475 crio[649]: time="2025-10-18T18:20:42.927936746Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 18:20:42 old-k8s-version-918475 crio[649]: time="2025-10-18T18:20:42.927954995Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 18:20:42 old-k8s-version-918475 crio[649]: time="2025-10-18T18:20:42.931087662Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 18:20:42 old-k8s-version-918475 crio[649]: time="2025-10-18T18:20:42.931122215Z" level=info msg="Updated default CNI network name to kindnet"
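The repeated "CNI monitoring event CREATE/WRITE/RENAME" entries above come from CRI-O watching /etc/cni/net.d and re-reading the kindnet conflist whenever kindnet rewrites it. Purely as a rough sketch of that file-watch pattern, assuming the third-party github.com/fsnotify/fsnotify package (this is not CRI-O's actual watcher):

// cni_watch.go: illustrative sketch of watching a CNI config directory.
package main

import (
	"log"
	"path/filepath"

	"github.com/fsnotify/fsnotify"
)

func main() {
	watcher, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer watcher.Close()

	// Watch the directory; events fire for files created, written, or renamed inside it.
	if err := watcher.Add("/etc/cni/net.d"); err != nil {
		log.Fatal(err)
	}

	for {
		select {
		case ev, ok := <-watcher.Events:
			if !ok {
				return
			}
			// Only CNI config files are interesting; a real runtime would
			// re-parse the conflist and update its default network here.
			if ext := filepath.Ext(ev.Name); ext == ".conflist" || ext == ".conf" {
				log.Printf("CNI monitoring event %s %q", ev.Op, ev.Name)
			}
		case werr, ok := <-watcher.Errors:
			if !ok {
				return
			}
			log.Println("watch error:", werr)
		}
	}
}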
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	3392b5258cdf8       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           20 seconds ago       Exited              dashboard-metrics-scraper   2                   1aaaff73ac079       dashboard-metrics-scraper-5f989dc9cf-rmgzd       kubernetes-dashboard
	cd3f22f6532cd       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           24 seconds ago       Running             storage-provisioner         2                   38e748c788982       storage-provisioner                              kube-system
	d89d2a46547ff       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   33 seconds ago       Running             kubernetes-dashboard        0                   4d66519dabe7d       kubernetes-dashboard-8694d4445c-4dr8k            kubernetes-dashboard
	37f6234e2a329       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           54 seconds ago       Running             coredns                     1                   ea34ba138c5ce       coredns-5dd5756b68-kd9bz                         kube-system
	343e3671d8d97       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           54 seconds ago       Running             busybox                     1                   7e3e15c36d5f6       busybox                                          default
	6b83bfedf7c37       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           55 seconds ago       Running             kube-proxy                  1                   93ffc315e4694       kube-proxy-776dm                                 kube-system
	235f17d855192       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           55 seconds ago       Exited              storage-provisioner         1                   38e748c788982       storage-provisioner                              kube-system
	788c1c65cd78f       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           55 seconds ago       Running             kindnet-cni                 1                   df2368ce940f6       kindnet-l8wgz                                    kube-system
	ae1eebdab3cf7       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           About a minute ago   Running             kube-scheduler              1                   ccac66ee79706       kube-scheduler-old-k8s-version-918475            kube-system
	c25897e752da3       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           About a minute ago   Running             etcd                        1                   b1381bfc7def3       etcd-old-k8s-version-918475                      kube-system
	4c59d57fbe375       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           About a minute ago   Running             kube-controller-manager     1                   d37a45450967d       kube-controller-manager-old-k8s-version-918475   kube-system
	ccff0d24759b8       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           About a minute ago   Running             kube-apiserver              1                   bfa6e8f6b0cd7       kube-apiserver-old-k8s-version-918475            kube-system
	
	
	==> coredns [37f6234e2a329ee90bd5cd471d64c651488562dd21c5b2e64d113322e20e47fe] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:49160 - 29310 "HINFO IN 7959786435207860516.7452248654979187562. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022475749s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-918475
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-918475
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=old-k8s-version-918475
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T18_18_55_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 18:18:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-918475
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 18:20:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 18:20:32 +0000   Sat, 18 Oct 2025 18:18:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 18:20:32 +0000   Sat, 18 Oct 2025 18:18:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 18:20:32 +0000   Sat, 18 Oct 2025 18:18:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 18:20:32 +0000   Sat, 18 Oct 2025 18:19:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-918475
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                f524e506-54d4-439d-bba8-8edfc5d97a5b
	  Boot ID:                    8a1d6305-2994-4aa4-a4cc-6b62966b9918
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-5dd5756b68-kd9bz                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     110s
	  kube-system                 etcd-old-k8s-version-918475                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m3s
	  kube-system                 kindnet-l8wgz                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      110s
	  kube-system                 kube-apiserver-old-k8s-version-918475             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-controller-manager-old-k8s-version-918475    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-proxy-776dm                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-scheduler-old-k8s-version-918475             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-rmgzd        0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-4dr8k             0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 109s                   kube-proxy       
	  Normal  Starting                 54s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  2m11s (x8 over 2m11s)  kubelet          Node old-k8s-version-918475 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m11s (x8 over 2m11s)  kubelet          Node old-k8s-version-918475 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m11s (x8 over 2m11s)  kubelet          Node old-k8s-version-918475 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     2m3s                   kubelet          Node old-k8s-version-918475 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m3s                   kubelet          Node old-k8s-version-918475 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m3s                   kubelet          Node old-k8s-version-918475 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m3s                   kubelet          Starting kubelet.
	  Normal  RegisteredNode           111s                   node-controller  Node old-k8s-version-918475 event: Registered Node old-k8s-version-918475 in Controller
	  Normal  NodeReady                97s                    kubelet          Node old-k8s-version-918475 status is now: NodeReady
	  Normal  Starting                 62s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  62s (x8 over 62s)      kubelet          Node old-k8s-version-918475 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    62s (x8 over 62s)      kubelet          Node old-k8s-version-918475 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     62s (x8 over 62s)      kubelet          Node old-k8s-version-918475 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           43s                    node-controller  Node old-k8s-version-918475 event: Registered Node old-k8s-version-918475 in Controller
	
	
	==> dmesg <==
	[Oct18 17:53] overlayfs: idmapped layers are currently not supported
	[Oct18 17:58] overlayfs: idmapped layers are currently not supported
	[ +33.320958] overlayfs: idmapped layers are currently not supported
	[Oct18 18:00] overlayfs: idmapped layers are currently not supported
	[Oct18 18:01] overlayfs: idmapped layers are currently not supported
	[Oct18 18:02] overlayfs: idmapped layers are currently not supported
	[Oct18 18:04] overlayfs: idmapped layers are currently not supported
	[ +24.403909] overlayfs: idmapped layers are currently not supported
	[  +6.162774] overlayfs: idmapped layers are currently not supported
	[Oct18 18:05] overlayfs: idmapped layers are currently not supported
	[ +25.128760] overlayfs: idmapped layers are currently not supported
	[Oct18 18:06] overlayfs: idmapped layers are currently not supported
	[Oct18 18:07] overlayfs: idmapped layers are currently not supported
	[Oct18 18:08] overlayfs: idmapped layers are currently not supported
	[Oct18 18:09] overlayfs: idmapped layers are currently not supported
	[Oct18 18:11] overlayfs: idmapped layers are currently not supported
	[Oct18 18:13] overlayfs: idmapped layers are currently not supported
	[ +30.969240] overlayfs: idmapped layers are currently not supported
	[Oct18 18:15] overlayfs: idmapped layers are currently not supported
	[Oct18 18:16] overlayfs: idmapped layers are currently not supported
	[Oct18 18:17] overlayfs: idmapped layers are currently not supported
	[ +23.167826] overlayfs: idmapped layers are currently not supported
	[Oct18 18:18] overlayfs: idmapped layers are currently not supported
	[ +38.509809] overlayfs: idmapped layers are currently not supported
	[Oct18 18:19] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [c25897e752da3dc12951f02ee89f3ec475fb055cb90f27cb1ccc4cde1fc8c6de] <==
	{"level":"info","ts":"2025-10-18T18:19:56.916293Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-18T18:19:56.916317Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-18T18:19:56.916491Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-10-18T18:19:56.916539Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-10-18T18:19:56.916615Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-18T18:19:56.916652Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-18T18:19:56.916764Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-18T18:19:56.91681Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-18T18:19:56.91682Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-18T18:19:56.921257Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-18T18:19:56.921285Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-18T18:19:57.976818Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-18T18:19:57.976968Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-18T18:19:57.977037Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-10-18T18:19:57.977076Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-10-18T18:19:57.977106Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-10-18T18:19:57.97714Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-10-18T18:19:57.97717Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-10-18T18:19:57.981282Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-18T18:19:57.982345Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-10-18T18:19:57.982731Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-18T18:19:57.983578Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-18T18:19:57.98125Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-918475 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-18T18:19:58.014609Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-18T18:19:58.014705Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 18:20:57 up  2:03,  0 user,  load average: 2.07, 2.98, 2.69
	Linux old-k8s-version-918475 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [788c1c65cd78f7ed26e15732fc3c949da8652d1331bb7a89fd1b2fa40c67386f] <==
	I1018 18:20:02.717572       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 18:20:02.717813       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1018 18:20:02.717945       1 main.go:148] setting mtu 1500 for CNI 
	I1018 18:20:02.717957       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 18:20:02.717967       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T18:20:02Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 18:20:02.908824       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 18:20:02.908842       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 18:20:02.908851       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 18:20:02.909187       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1018 18:20:32.909488       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1018 18:20:32.909495       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1018 18:20:32.909635       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1018 18:20:32.909612       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1018 18:20:34.409293       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 18:20:34.409327       1 metrics.go:72] Registering metrics
	I1018 18:20:34.409404       1 controller.go:711] "Syncing nftables rules"
	I1018 18:20:42.911240       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 18:20:42.911280       1 main.go:301] handling current node
	I1018 18:20:52.914576       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 18:20:52.914611       1 main.go:301] handling current node
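The kindnet log above shows client-go reflectors failing to list Nodes, Pods, Namespaces and NetworkPolicies while the service VIP 10.96.0.1:443 is unreachable, then reporting "Caches are synced" once the apiserver responds. A minimal sketch of that informer/cache-sync pattern with client-go, assuming in-cluster config (not kindnet's actual code):

// node_informer.go: illustrative sketch of a client-go informer waiting for cache sync.
package main

import (
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	// In-cluster config talks to the kubernetes service VIP; reflectors retry
	// automatically while the apiserver is not yet reachable.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	factory := informers.NewSharedInformerFactory(client, 30*time.Second)
	nodeInformer := factory.Core().V1().Nodes().Informer()
	nodeInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			node := obj.(*corev1.Node)
			log.Printf("Handling node %s", node.Name)
		},
	})

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)

	// Corresponds to the "Waiting for caches to sync" / "Caches are synced" lines.
	if !cache.WaitForCacheSync(stop, nodeInformer.HasSynced) {
		log.Fatal("timed out waiting for caches to sync")
	}
	log.Println("Caches are synced")
	select {}
}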
	
	
	==> kube-apiserver [ccff0d24759b80bcd65a9894c590246579f8cca877d5c90bf983408a0b729bb9] <==
	I1018 18:20:01.197833       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1018 18:20:01.491860       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1018 18:20:01.582170       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1018 18:20:01.587159       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 18:20:01.598352       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1018 18:20:01.598386       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1018 18:20:01.598372       1 shared_informer.go:318] Caches are synced for configmaps
	I1018 18:20:01.598518       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1018 18:20:01.598622       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1018 18:20:01.599507       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1018 18:20:01.599646       1 aggregator.go:166] initial CRD sync complete...
	I1018 18:20:01.599686       1 autoregister_controller.go:141] Starting autoregister controller
	I1018 18:20:01.599715       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 18:20:01.599741       1 cache.go:39] Caches are synced for autoregister controller
	I1018 18:20:02.200789       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 18:20:03.483494       1 controller.go:624] quota admission added evaluator for: namespaces
	I1018 18:20:03.528994       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1018 18:20:03.557112       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 18:20:03.567920       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 18:20:03.580660       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1018 18:20:03.634475       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.179.20"}
	I1018 18:20:03.651763       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.104.172"}
	I1018 18:20:14.863060       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1018 18:20:14.930013       1 controller.go:624] quota admission added evaluator for: endpoints
	I1018 18:20:14.980645       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [4c59d57fbe375bf1c3be0746a92ca4d85fdf6a06d96f6437faeb7e9324c89a9b] <==
	I1018 18:20:14.878922       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1018 18:20:14.893389       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-4dr8k"
	I1018 18:20:14.894734       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-rmgzd"
	I1018 18:20:14.922641       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="44.262767ms"
	I1018 18:20:14.924370       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="54.605316ms"
	I1018 18:20:14.947560       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="24.778244ms"
	I1018 18:20:14.947907       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="157.401µs"
	I1018 18:20:14.975443       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="51.033586ms"
	I1018 18:20:14.992019       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="132.712µs"
	I1018 18:20:14.997375       1 endpointslice_controller.go:310] "Error syncing endpoint slices for service, retrying" key="kubernetes-dashboard/kubernetes-dashboard" err="EndpointSlice informer cache is out of date"
	I1018 18:20:14.997851       1 endpointslice_controller.go:310] "Error syncing endpoint slices for service, retrying" key="kubernetes-dashboard/dashboard-metrics-scraper" err="EndpointSlice informer cache is out of date"
	I1018 18:20:15.004347       1 shared_informer.go:318] Caches are synced for garbage collector
	I1018 18:20:15.004621       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="29.046935ms"
	I1018 18:20:15.004751       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="86.524µs"
	I1018 18:20:15.040599       1 shared_informer.go:318] Caches are synced for garbage collector
	I1018 18:20:15.040696       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1018 18:20:20.037190       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="301.575µs"
	I1018 18:20:21.042594       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="50.471µs"
	I1018 18:20:22.056729       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="101.055µs"
	I1018 18:20:25.076407       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="12.698516ms"
	I1018 18:20:25.077877       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="48.871µs"
	I1018 18:20:37.101004       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="63.352µs"
	I1018 18:20:38.296721       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="16.328668ms"
	I1018 18:20:38.297009       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="107.768µs"
	I1018 18:20:45.276299       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="64.624µs"
	
	
	==> kube-proxy [6b83bfedf7c37fc9bf7f3d03db7cee37209be54754656efb059c09c8f2eb2ceb] <==
	I1018 18:20:02.822903       1 server_others.go:69] "Using iptables proxy"
	I1018 18:20:02.845714       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1018 18:20:02.903775       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 18:20:02.911518       1 server_others.go:152] "Using iptables Proxier"
	I1018 18:20:02.911555       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1018 18:20:02.911564       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1018 18:20:02.911595       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1018 18:20:02.911796       1 server.go:846] "Version info" version="v1.28.0"
	I1018 18:20:02.911806       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 18:20:02.913064       1 config.go:188] "Starting service config controller"
	I1018 18:20:02.913076       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1018 18:20:02.913097       1 config.go:97] "Starting endpoint slice config controller"
	I1018 18:20:02.913101       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1018 18:20:02.921034       1 config.go:315] "Starting node config controller"
	I1018 18:20:02.921056       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1018 18:20:03.024869       1 shared_informer.go:318] Caches are synced for node config
	I1018 18:20:03.024915       1 shared_informer.go:318] Caches are synced for service config
	I1018 18:20:03.025009       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [ae1eebdab3cf71a07bd4eae6a705ba7ff86c020ba58671cfcc9759010c46c239] <==
	W1018 18:20:01.490060       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1018 18:20:01.490085       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1018 18:20:01.490181       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1018 18:20:01.490199       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1018 18:20:01.490236       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1018 18:20:01.490297       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1018 18:20:01.490253       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1018 18:20:01.490379       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1018 18:20:01.490360       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1018 18:20:01.490454       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1018 18:20:01.490431       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1018 18:20:01.490515       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1018 18:20:01.490546       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1018 18:20:01.490525       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1018 18:20:01.490589       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1018 18:20:01.490620       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1018 18:20:01.490664       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1018 18:20:01.490680       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1018 18:20:01.490747       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1018 18:20:01.490791       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1018 18:20:01.490755       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1018 18:20:01.490861       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1018 18:20:01.490825       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1018 18:20:01.490920       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I1018 18:20:03.142143       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 18 18:20:15 old-k8s-version-918475 kubelet[776]: I1018 18:20:15.068979     776 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmgfr\" (UniqueName: \"kubernetes.io/projected/7a344bb7-dbef-407e-a17b-95ee3212304e-kube-api-access-xmgfr\") pod \"kubernetes-dashboard-8694d4445c-4dr8k\" (UID: \"7a344bb7-dbef-407e-a17b-95ee3212304e\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-4dr8k"
	Oct 18 18:20:15 old-k8s-version-918475 kubelet[776]: I1018 18:20:15.069159     776 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/bacb32f2-b820-4279-a362-120e4c43e038-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-rmgzd\" (UID: \"bacb32f2-b820-4279-a362-120e4c43e038\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-rmgzd"
	Oct 18 18:20:15 old-k8s-version-918475 kubelet[776]: I1018 18:20:15.069221     776 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2vtn\" (UniqueName: \"kubernetes.io/projected/bacb32f2-b820-4279-a362-120e4c43e038-kube-api-access-v2vtn\") pod \"dashboard-metrics-scraper-5f989dc9cf-rmgzd\" (UID: \"bacb32f2-b820-4279-a362-120e4c43e038\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-rmgzd"
	Oct 18 18:20:15 old-k8s-version-918475 kubelet[776]: W1018 18:20:15.255157     776 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/13ab62783a421f101660de74d2bec3818ff41a6620bfd3ec135d6adb2e8c1df6/crio-1aaaff73ac079cfbf412f43d942a472f345bb79dd1b0f6895cb2acebb44ff4d4 WatchSource:0}: Error finding container 1aaaff73ac079cfbf412f43d942a472f345bb79dd1b0f6895cb2acebb44ff4d4: Status 404 returned error can't find the container with id 1aaaff73ac079cfbf412f43d942a472f345bb79dd1b0f6895cb2acebb44ff4d4
	Oct 18 18:20:15 old-k8s-version-918475 kubelet[776]: W1018 18:20:15.260153     776 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/13ab62783a421f101660de74d2bec3818ff41a6620bfd3ec135d6adb2e8c1df6/crio-4d66519dabe7d4e152ca7dec224dd41115e4b2b93a1c9d52e43736a63b94135a WatchSource:0}: Error finding container 4d66519dabe7d4e152ca7dec224dd41115e4b2b93a1c9d52e43736a63b94135a: Status 404 returned error can't find the container with id 4d66519dabe7d4e152ca7dec224dd41115e4b2b93a1c9d52e43736a63b94135a
	Oct 18 18:20:20 old-k8s-version-918475 kubelet[776]: I1018 18:20:20.017760     776 scope.go:117] "RemoveContainer" containerID="79957774176cb644567f0a3c2ddaf95eace2bc647525f5d2d1babc61bd7ec58d"
	Oct 18 18:20:21 old-k8s-version-918475 kubelet[776]: I1018 18:20:21.023004     776 scope.go:117] "RemoveContainer" containerID="79957774176cb644567f0a3c2ddaf95eace2bc647525f5d2d1babc61bd7ec58d"
	Oct 18 18:20:21 old-k8s-version-918475 kubelet[776]: I1018 18:20:21.023293     776 scope.go:117] "RemoveContainer" containerID="5fb89de9a0c8095a171816441d5e52f60c24997b7dcf1e941e6b0ce02c938c11"
	Oct 18 18:20:21 old-k8s-version-918475 kubelet[776]: E1018 18:20:21.026524     776 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-rmgzd_kubernetes-dashboard(bacb32f2-b820-4279-a362-120e4c43e038)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-rmgzd" podUID="bacb32f2-b820-4279-a362-120e4c43e038"
	Oct 18 18:20:22 old-k8s-version-918475 kubelet[776]: I1018 18:20:22.031719     776 scope.go:117] "RemoveContainer" containerID="5fb89de9a0c8095a171816441d5e52f60c24997b7dcf1e941e6b0ce02c938c11"
	Oct 18 18:20:22 old-k8s-version-918475 kubelet[776]: E1018 18:20:22.032117     776 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-rmgzd_kubernetes-dashboard(bacb32f2-b820-4279-a362-120e4c43e038)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-rmgzd" podUID="bacb32f2-b820-4279-a362-120e4c43e038"
	Oct 18 18:20:25 old-k8s-version-918475 kubelet[776]: I1018 18:20:25.220915     776 scope.go:117] "RemoveContainer" containerID="5fb89de9a0c8095a171816441d5e52f60c24997b7dcf1e941e6b0ce02c938c11"
	Oct 18 18:20:25 old-k8s-version-918475 kubelet[776]: E1018 18:20:25.221284     776 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-rmgzd_kubernetes-dashboard(bacb32f2-b820-4279-a362-120e4c43e038)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-rmgzd" podUID="bacb32f2-b820-4279-a362-120e4c43e038"
	Oct 18 18:20:33 old-k8s-version-918475 kubelet[776]: I1018 18:20:33.067562     776 scope.go:117] "RemoveContainer" containerID="235f17d855192310ccb1a489b3d0c7f7ebbad52420790a930e349f341c3e8d8f"
	Oct 18 18:20:33 old-k8s-version-918475 kubelet[776]: I1018 18:20:33.091608     776 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-4dr8k" podStartSLOduration=10.325332492 podCreationTimestamp="2025-10-18 18:20:14 +0000 UTC" firstStartedPulling="2025-10-18 18:20:15.266001372 +0000 UTC m=+19.602675859" lastFinishedPulling="2025-10-18 18:20:24.032214434 +0000 UTC m=+28.368888920" observedRunningTime="2025-10-18 18:20:25.062536464 +0000 UTC m=+29.399210959" watchObservedRunningTime="2025-10-18 18:20:33.091545553 +0000 UTC m=+37.428220048"
	Oct 18 18:20:36 old-k8s-version-918475 kubelet[776]: I1018 18:20:36.805621     776 scope.go:117] "RemoveContainer" containerID="5fb89de9a0c8095a171816441d5e52f60c24997b7dcf1e941e6b0ce02c938c11"
	Oct 18 18:20:37 old-k8s-version-918475 kubelet[776]: I1018 18:20:37.080553     776 scope.go:117] "RemoveContainer" containerID="5fb89de9a0c8095a171816441d5e52f60c24997b7dcf1e941e6b0ce02c938c11"
	Oct 18 18:20:37 old-k8s-version-918475 kubelet[776]: I1018 18:20:37.080855     776 scope.go:117] "RemoveContainer" containerID="3392b5258cdf8bfa7e27174e4bb9b951b93b4fad5ce1d4d1606afad1f4c6d3da"
	Oct 18 18:20:37 old-k8s-version-918475 kubelet[776]: E1018 18:20:37.081178     776 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-rmgzd_kubernetes-dashboard(bacb32f2-b820-4279-a362-120e4c43e038)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-rmgzd" podUID="bacb32f2-b820-4279-a362-120e4c43e038"
	Oct 18 18:20:45 old-k8s-version-918475 kubelet[776]: I1018 18:20:45.237713     776 scope.go:117] "RemoveContainer" containerID="3392b5258cdf8bfa7e27174e4bb9b951b93b4fad5ce1d4d1606afad1f4c6d3da"
	Oct 18 18:20:45 old-k8s-version-918475 kubelet[776]: E1018 18:20:45.238661     776 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-rmgzd_kubernetes-dashboard(bacb32f2-b820-4279-a362-120e4c43e038)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-rmgzd" podUID="bacb32f2-b820-4279-a362-120e4c43e038"
	Oct 18 18:20:52 old-k8s-version-918475 kubelet[776]: I1018 18:20:52.540118     776 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 18 18:20:52 old-k8s-version-918475 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 18:20:52 old-k8s-version-918475 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 18:20:52 old-k8s-version-918475 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [d89d2a46547ff61a18edd8b04922f7e9a116c1917c95fdc49f17527d20b9e15e] <==
	2025/10/18 18:20:24 Using namespace: kubernetes-dashboard
	2025/10/18 18:20:24 Using in-cluster config to connect to apiserver
	2025/10/18 18:20:24 Using secret token for csrf signing
	2025/10/18 18:20:24 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/18 18:20:24 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/18 18:20:24 Successful initial request to the apiserver, version: v1.28.0
	2025/10/18 18:20:24 Generating JWE encryption key
	2025/10/18 18:20:24 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/18 18:20:24 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/18 18:20:24 Initializing JWE encryption key from synchronized object
	2025/10/18 18:20:24 Creating in-cluster Sidecar client
	2025/10/18 18:20:24 Serving insecurely on HTTP port: 9090
	2025/10/18 18:20:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 18:20:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 18:20:24 Starting overwatch
	
	
	==> storage-provisioner [235f17d855192310ccb1a489b3d0c7f7ebbad52420790a930e349f341c3e8d8f] <==
	I1018 18:20:02.671974       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1018 18:20:32.674698       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [cd3f22f6532cd26d3f39cbbc7521c9d8a7a712934dd47081a9ad7584afc64c38] <==
	I1018 18:20:33.116868       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 18:20:33.130581       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 18:20:33.130627       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1018 18:20:50.530883       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 18:20:50.531460       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"50769fb3-6713-46ea-856e-a4e705d84615", APIVersion:"v1", ResourceVersion:"664", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-918475_1e70752d-57bb-422f-bb0a-538fa8bcc735 became leader
	I1018 18:20:50.531725       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-918475_1e70752d-57bb-422f-bb0a-538fa8bcc735!
	I1018 18:20:50.632650       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-918475_1e70752d-57bb-422f-bb0a-538fa8bcc735!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-918475 -n old-k8s-version-918475
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-918475 -n old-k8s-version-918475: exit status 2 (394.576936ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-918475 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.50s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.57s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-192562 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-192562 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (261.709642ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T18:22:38Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-192562 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-192562 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-192562 describe deploy/metrics-server -n kube-system: exit status 1 (79.928909ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-192562 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-192562
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-192562:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c0a8933c552c9d4e5fb4ca01ca33c573463079ebfb6960b8ac96dc752d5faeaa",
	        "Created": "2025-10-18T18:21:07.306681967Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 198097,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T18:21:07.386926241Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/c0a8933c552c9d4e5fb4ca01ca33c573463079ebfb6960b8ac96dc752d5faeaa/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c0a8933c552c9d4e5fb4ca01ca33c573463079ebfb6960b8ac96dc752d5faeaa/hostname",
	        "HostsPath": "/var/lib/docker/containers/c0a8933c552c9d4e5fb4ca01ca33c573463079ebfb6960b8ac96dc752d5faeaa/hosts",
	        "LogPath": "/var/lib/docker/containers/c0a8933c552c9d4e5fb4ca01ca33c573463079ebfb6960b8ac96dc752d5faeaa/c0a8933c552c9d4e5fb4ca01ca33c573463079ebfb6960b8ac96dc752d5faeaa-json.log",
	        "Name": "/default-k8s-diff-port-192562",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-192562:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-192562",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c0a8933c552c9d4e5fb4ca01ca33c573463079ebfb6960b8ac96dc752d5faeaa",
	                "LowerDir": "/var/lib/docker/overlay2/dee070e079682e34299d25230ff60b4454bdeead13a662fbf9dd6a74e43397c1-init/diff:/var/lib/docker/overlay2/584ab177b02ad2db5330471b7171ad39934c457d8615b9ee4939a04b59f78474/diff",
	                "MergedDir": "/var/lib/docker/overlay2/dee070e079682e34299d25230ff60b4454bdeead13a662fbf9dd6a74e43397c1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/dee070e079682e34299d25230ff60b4454bdeead13a662fbf9dd6a74e43397c1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/dee070e079682e34299d25230ff60b4454bdeead13a662fbf9dd6a74e43397c1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-192562",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-192562/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-192562",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-192562",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-192562",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "88e9049b886c1e6c1ef79524c9c6a1a13780164c74d46c8a12432e6c14441343",
	            "SandboxKey": "/var/run/docker/netns/88e9049b886c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33053"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33054"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33057"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33055"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33056"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-192562": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fe:49:92:4f:9a:52",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "38c20734cd0994956410457c1029d2a36f99d2c176924ac552fc426e5efdac60",
	                    "EndpointID": "f2a2bf4d13bd02a342e73a8f2c33ab5bd44fe9746eae86a436316b467d4c2838",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-192562",
	                        "c0a8933c552c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-192562 -n default-k8s-diff-port-192562
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-192562 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-192562 logs -n 25: (1.323603689s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p cilium-111074 sudo crio config                                                                                                                                                                                                             │ cilium-111074                │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │                     │
	│ delete  │ -p cilium-111074                                                                                                                                                                                                                              │ cilium-111074                │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │ 18 Oct 25 18:16 UTC │
	│ start   │ -p force-systemd-env-785999 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-785999     │ jenkins │ v1.37.0 │ 18 Oct 25 18:16 UTC │ 18 Oct 25 18:17 UTC │
	│ pause   │ -p pause-321903 --alsologtostderr -v=5                                                                                                                                                                                                        │ pause-321903                 │ jenkins │ v1.37.0 │ 18 Oct 25 18:17 UTC │                     │
	│ delete  │ -p pause-321903                                                                                                                                                                                                                               │ pause-321903                 │ jenkins │ v1.37.0 │ 18 Oct 25 18:17 UTC │ 18 Oct 25 18:17 UTC │
	│ start   │ -p cert-expiration-463770 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-463770       │ jenkins │ v1.37.0 │ 18 Oct 25 18:17 UTC │ 18 Oct 25 18:18 UTC │
	│ delete  │ -p force-systemd-env-785999                                                                                                                                                                                                                   │ force-systemd-env-785999     │ jenkins │ v1.37.0 │ 18 Oct 25 18:17 UTC │ 18 Oct 25 18:17 UTC │
	│ start   │ -p cert-options-327418 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-327418          │ jenkins │ v1.37.0 │ 18 Oct 25 18:17 UTC │ 18 Oct 25 18:18 UTC │
	│ ssh     │ cert-options-327418 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-327418          │ jenkins │ v1.37.0 │ 18 Oct 25 18:18 UTC │ 18 Oct 25 18:18 UTC │
	│ ssh     │ -p cert-options-327418 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-327418          │ jenkins │ v1.37.0 │ 18 Oct 25 18:18 UTC │ 18 Oct 25 18:18 UTC │
	│ delete  │ -p cert-options-327418                                                                                                                                                                                                                        │ cert-options-327418          │ jenkins │ v1.37.0 │ 18 Oct 25 18:18 UTC │ 18 Oct 25 18:18 UTC │
	│ start   │ -p old-k8s-version-918475 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-918475       │ jenkins │ v1.37.0 │ 18 Oct 25 18:18 UTC │ 18 Oct 25 18:19 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-918475 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-918475       │ jenkins │ v1.37.0 │ 18 Oct 25 18:19 UTC │                     │
	│ stop    │ -p old-k8s-version-918475 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-918475       │ jenkins │ v1.37.0 │ 18 Oct 25 18:19 UTC │ 18 Oct 25 18:19 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-918475 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-918475       │ jenkins │ v1.37.0 │ 18 Oct 25 18:19 UTC │ 18 Oct 25 18:19 UTC │
	│ start   │ -p old-k8s-version-918475 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-918475       │ jenkins │ v1.37.0 │ 18 Oct 25 18:19 UTC │ 18 Oct 25 18:20 UTC │
	│ image   │ old-k8s-version-918475 image list --format=json                                                                                                                                                                                               │ old-k8s-version-918475       │ jenkins │ v1.37.0 │ 18 Oct 25 18:20 UTC │ 18 Oct 25 18:20 UTC │
	│ pause   │ -p old-k8s-version-918475 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-918475       │ jenkins │ v1.37.0 │ 18 Oct 25 18:20 UTC │                     │
	│ delete  │ -p old-k8s-version-918475                                                                                                                                                                                                                     │ old-k8s-version-918475       │ jenkins │ v1.37.0 │ 18 Oct 25 18:20 UTC │ 18 Oct 25 18:21 UTC │
	│ start   │ -p cert-expiration-463770 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-463770       │ jenkins │ v1.37.0 │ 18 Oct 25 18:21 UTC │ 18 Oct 25 18:21 UTC │
	│ delete  │ -p old-k8s-version-918475                                                                                                                                                                                                                     │ old-k8s-version-918475       │ jenkins │ v1.37.0 │ 18 Oct 25 18:21 UTC │ 18 Oct 25 18:21 UTC │
	│ start   │ -p default-k8s-diff-port-192562 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-192562 │ jenkins │ v1.37.0 │ 18 Oct 25 18:21 UTC │ 18 Oct 25 18:22 UTC │
	│ delete  │ -p cert-expiration-463770                                                                                                                                                                                                                     │ cert-expiration-463770       │ jenkins │ v1.37.0 │ 18 Oct 25 18:21 UTC │ 18 Oct 25 18:21 UTC │
	│ start   │ -p embed-certs-213943 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-213943           │ jenkins │ v1.37.0 │ 18 Oct 25 18:21 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-192562 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-192562 │ jenkins │ v1.37.0 │ 18 Oct 25 18:22 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 18:21:34
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 18:21:34.896338  200913 out.go:360] Setting OutFile to fd 1 ...
	I1018 18:21:34.896537  200913 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 18:21:34.896564  200913 out.go:374] Setting ErrFile to fd 2...
	I1018 18:21:34.896584  200913 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 18:21:34.896877  200913 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-2509/.minikube/bin
	I1018 18:21:34.897380  200913 out.go:368] Setting JSON to false
	I1018 18:21:34.898327  200913 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7444,"bootTime":1760804251,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1018 18:21:34.898423  200913 start.go:141] virtualization:  
	I1018 18:21:34.902464  200913 out.go:179] * [embed-certs-213943] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 18:21:34.905709  200913 notify.go:220] Checking for updates...
	I1018 18:21:34.908743  200913 out.go:179]   - MINIKUBE_LOCATION=21409
	I1018 18:21:34.911688  200913 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 18:21:34.914530  200913 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-2509/kubeconfig
	I1018 18:21:34.917461  200913 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-2509/.minikube
	I1018 18:21:34.920268  200913 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 18:21:34.923056  200913 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 18:21:34.926459  200913 config.go:182] Loaded profile config "default-k8s-diff-port-192562": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 18:21:34.926578  200913 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 18:21:34.976401  200913 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 18:21:34.976531  200913 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 18:21:35.084087  200913 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 18:21:35.069211061 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 18:21:35.084203  200913 docker.go:318] overlay module found
	I1018 18:21:35.087399  200913 out.go:179] * Using the docker driver based on user configuration
	I1018 18:21:35.090238  200913 start.go:305] selected driver: docker
	I1018 18:21:35.090268  200913 start.go:925] validating driver "docker" against <nil>
	I1018 18:21:35.090284  200913 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 18:21:35.091021  200913 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 18:21:35.197295  200913 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 18:21:35.181032782 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 18:21:35.197442  200913 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 18:21:35.197684  200913 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 18:21:35.200686  200913 out.go:179] * Using Docker driver with root privileges
	I1018 18:21:35.203429  200913 cni.go:84] Creating CNI manager for ""
	I1018 18:21:35.203505  200913 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 18:21:35.203520  200913 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 18:21:35.203610  200913 start.go:349] cluster config:
	{Name:embed-certs-213943 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-213943 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 18:21:35.206994  200913 out.go:179] * Starting "embed-certs-213943" primary control-plane node in "embed-certs-213943" cluster
	I1018 18:21:35.209686  200913 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 18:21:35.212547  200913 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 18:21:35.215588  200913 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 18:21:35.215651  200913 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1018 18:21:35.215666  200913 cache.go:58] Caching tarball of preloaded images
	I1018 18:21:35.215749  200913 preload.go:233] Found /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 18:21:35.215765  200913 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 18:21:35.215874  200913 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/embed-certs-213943/config.json ...
	I1018 18:21:35.215901  200913 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/embed-certs-213943/config.json: {Name:mk8662109abce84a8f3fa5dfd87172fe015918d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:21:35.216061  200913 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 18:21:35.244746  200913 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 18:21:35.244768  200913 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 18:21:35.244782  200913 cache.go:232] Successfully downloaded all kic artifacts
	I1018 18:21:35.244808  200913 start.go:360] acquireMachinesLock for embed-certs-213943: {Name:mk6236f8122624f68835f4877bda621eb0a7ae61 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 18:21:35.244914  200913 start.go:364] duration metric: took 85.491µs to acquireMachinesLock for "embed-certs-213943"
	I1018 18:21:35.244962  200913 start.go:93] Provisioning new machine with config: &{Name:embed-certs-213943 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-213943 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 18:21:35.245035  200913 start.go:125] createHost starting for "" (driver="docker")
	I1018 18:21:34.174950  197575 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 5.107361816s
	I1018 18:21:36.472289  197575 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 7.405222645s
	I1018 18:21:38.571958  197575 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 9.504891355s
	I1018 18:21:38.595592  197575 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 18:21:38.617012  197575 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 18:21:38.644064  197575 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 18:21:38.644500  197575 kubeadm.go:318] [mark-control-plane] Marking the node default-k8s-diff-port-192562 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 18:21:38.669135  197575 kubeadm.go:318] [bootstrap-token] Using token: y7xsdo.h8dy3uhqb5081sa9
	I1018 18:21:35.248417  200913 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1018 18:21:35.248669  200913 start.go:159] libmachine.API.Create for "embed-certs-213943" (driver="docker")
	I1018 18:21:35.248710  200913 client.go:168] LocalClient.Create starting
	I1018 18:21:35.248785  200913 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem
	I1018 18:21:35.248822  200913 main.go:141] libmachine: Decoding PEM data...
	I1018 18:21:35.248845  200913 main.go:141] libmachine: Parsing certificate...
	I1018 18:21:35.248899  200913 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem
	I1018 18:21:35.248926  200913 main.go:141] libmachine: Decoding PEM data...
	I1018 18:21:35.248957  200913 main.go:141] libmachine: Parsing certificate...
	I1018 18:21:35.249319  200913 cli_runner.go:164] Run: docker network inspect embed-certs-213943 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 18:21:35.275945  200913 cli_runner.go:211] docker network inspect embed-certs-213943 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 18:21:35.276028  200913 network_create.go:284] running [docker network inspect embed-certs-213943] to gather additional debugging logs...
	I1018 18:21:35.276043  200913 cli_runner.go:164] Run: docker network inspect embed-certs-213943
	W1018 18:21:35.293550  200913 cli_runner.go:211] docker network inspect embed-certs-213943 returned with exit code 1
	I1018 18:21:35.293576  200913 network_create.go:287] error running [docker network inspect embed-certs-213943]: docker network inspect embed-certs-213943: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-213943 not found
	I1018 18:21:35.293590  200913 network_create.go:289] output of [docker network inspect embed-certs-213943]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-213943 not found
	
	** /stderr **
	I1018 18:21:35.293694  200913 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 18:21:35.309906  200913 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-903568cdf824 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:7a:80:c0:8c:ed} reservation:<nil>}
	I1018 18:21:35.310250  200913 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-ee9fcaab9ca8 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a6:a7:65:1b:c0:41} reservation:<nil>}
	I1018 18:21:35.310740  200913 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-414fc11e154b IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:86:f0:a8:1a:86:00} reservation:<nil>}
	I1018 18:21:35.310989  200913 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-38c20734cd09 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:36:24:9e:88:6b:ca} reservation:<nil>}
	I1018 18:21:35.311368  200913 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a426c0}
	I1018 18:21:35.311385  200913 network_create.go:124] attempt to create docker network embed-certs-213943 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1018 18:21:35.311437  200913 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-213943 embed-certs-213943
	I1018 18:21:35.392311  200913 network_create.go:108] docker network embed-certs-213943 192.168.85.0/24 created
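
The subnet scan above walks the existing Docker bridge networks (192.168.49.0/24, .58.0/24, .67.0/24 and .76.0/24 are already held by other profiles) and settles on the first free /24, here 192.168.85.0/24. A rough way to reproduce that scan by hand, assuming the same Docker host:

    docker network ls --filter driver=bridge --format '{{.Name}}' \
      | xargs -r docker network inspect --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}}{{end}}'

Any private /24 missing from that list is a candidate; minikube then creates the network explicitly with the --subnet/--gateway flags shown in the docker network create command above.
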
	I1018 18:21:35.392340  200913 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-213943" container
	I1018 18:21:35.392410  200913 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 18:21:35.425217  200913 cli_runner.go:164] Run: docker volume create embed-certs-213943 --label name.minikube.sigs.k8s.io=embed-certs-213943 --label created_by.minikube.sigs.k8s.io=true
	I1018 18:21:35.453009  200913 oci.go:103] Successfully created a docker volume embed-certs-213943
	I1018 18:21:35.453101  200913 cli_runner.go:164] Run: docker run --rm --name embed-certs-213943-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-213943 --entrypoint /usr/bin/test -v embed-certs-213943:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 18:21:36.154853  200913 oci.go:107] Successfully prepared a docker volume embed-certs-213943
	I1018 18:21:36.154899  200913 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 18:21:36.154919  200913 kic.go:194] Starting extracting preloaded images to volume ...
	I1018 18:21:36.154992  200913 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-213943:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1018 18:21:38.673071  197575 out.go:252]   - Configuring RBAC rules ...
	I1018 18:21:38.673204  197575 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 18:21:38.681637  197575 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 18:21:38.702645  197575 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 18:21:38.709169  197575 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 18:21:38.714306  197575 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 18:21:38.719638  197575 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 18:21:38.992496  197575 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 18:21:39.450875  197575 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 18:21:39.991958  197575 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 18:21:39.993576  197575 kubeadm.go:318] 
	I1018 18:21:39.993671  197575 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 18:21:39.993696  197575 kubeadm.go:318] 
	I1018 18:21:39.993785  197575 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 18:21:39.993795  197575 kubeadm.go:318] 
	I1018 18:21:39.993823  197575 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 18:21:39.993889  197575 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 18:21:39.993947  197575 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 18:21:39.993955  197575 kubeadm.go:318] 
	I1018 18:21:39.994011  197575 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 18:21:39.994019  197575 kubeadm.go:318] 
	I1018 18:21:39.994067  197575 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 18:21:39.994075  197575 kubeadm.go:318] 
	I1018 18:21:39.994135  197575 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 18:21:39.994216  197575 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 18:21:39.994291  197575 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 18:21:39.994299  197575 kubeadm.go:318] 
	I1018 18:21:39.994386  197575 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 18:21:39.994470  197575 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 18:21:39.994479  197575 kubeadm.go:318] 
	I1018 18:21:39.994568  197575 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8444 --token y7xsdo.h8dy3uhqb5081sa9 \
	I1018 18:21:39.994678  197575 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:d0244c5bf86cdf97546c6a22045cb6ed9d7ead524d9c98d9ca35da77d5d7a04d \
	I1018 18:21:39.994707  197575 kubeadm.go:318] 	--control-plane 
	I1018 18:21:39.994712  197575 kubeadm.go:318] 
	I1018 18:21:39.994800  197575 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 18:21:39.994808  197575 kubeadm.go:318] 
	I1018 18:21:39.994893  197575 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8444 --token y7xsdo.h8dy3uhqb5081sa9 \
	I1018 18:21:39.995004  197575 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:d0244c5bf86cdf97546c6a22045cb6ed9d7ead524d9c98d9ca35da77d5d7a04d 
	I1018 18:21:39.999272  197575 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1018 18:21:39.999524  197575 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1018 18:21:39.999648  197575 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
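
The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 digest of the cluster CA's public key. It can be recomputed on the control-plane node with the standard kubeadm recipe (assuming the default CA location /etc/kubernetes/pki/ca.crt):

    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'

which should reproduce the d0244c5b... value embedded in both join commands; the bootstrap token itself (y7xsdo.h8dy3uhqb5081sa9) expires after the default 24h TTL.
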
	I1018 18:21:39.999675  197575 cni.go:84] Creating CNI manager for ""
	I1018 18:21:39.999683  197575 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 18:21:40.022161  197575 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1018 18:21:40.027017  197575 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 18:21:40.031658  197575 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1018 18:21:40.031684  197575 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 18:21:40.046447  197575 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1018 18:21:40.483638  197575 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 18:21:40.483703  197575 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:21:40.483785  197575 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-192562 minikube.k8s.io/updated_at=2025_10_18T18_21_40_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404 minikube.k8s.io/name=default-k8s-diff-port-192562 minikube.k8s.io/primary=true
	I1018 18:21:40.674207  197575 ops.go:34] apiserver oom_adj: -16
	I1018 18:21:40.674307  197575 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:21:41.174427  197575 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:21:41.674713  197575 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:21:42.174868  197575 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:21:42.674953  197575 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:21:43.174552  197575 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:21:43.675406  197575 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:21:44.175212  197575 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:21:44.287429  197575 kubeadm.go:1113] duration metric: took 3.803784135s to wait for elevateKubeSystemPrivileges
	I1018 18:21:44.287460  197575 kubeadm.go:402] duration metric: took 27.263405338s to StartCluster
	I1018 18:21:44.287477  197575 settings.go:142] acquiring lock: {Name:mk3a3fd093bc95e20cc1842611fedcbe4a79e692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:21:44.287538  197575 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-2509/kubeconfig
	I1018 18:21:44.288229  197575 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/kubeconfig: {Name:mk229d7d0b6575771899e0ce8a346bbddf5cf86b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:21:44.288443  197575 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 18:21:44.288552  197575 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 18:21:44.288816  197575 config.go:182] Loaded profile config "default-k8s-diff-port-192562": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 18:21:44.288864  197575 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 18:21:44.288927  197575 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-192562"
	I1018 18:21:44.288970  197575 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-192562"
	I1018 18:21:44.288996  197575 host.go:66] Checking if "default-k8s-diff-port-192562" exists ...
	I1018 18:21:44.289483  197575 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-192562 --format={{.State.Status}}
	I1018 18:21:44.289823  197575 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-192562"
	I1018 18:21:44.289846  197575 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-192562"
	I1018 18:21:44.290104  197575 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-192562 --format={{.State.Status}}
	I1018 18:21:44.293214  197575 out.go:179] * Verifying Kubernetes components...
	I1018 18:21:44.296479  197575 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 18:21:44.328420  197575 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-192562"
	I1018 18:21:44.328460  197575 host.go:66] Checking if "default-k8s-diff-port-192562" exists ...
	I1018 18:21:44.328909  197575 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-192562 --format={{.State.Status}}
	I1018 18:21:44.339078  197575 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 18:21:41.029607  200913 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-213943:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.874563326s)
	I1018 18:21:41.029640  200913 kic.go:203] duration metric: took 4.874717675s to extract preloaded images to volume ...
	W1018 18:21:41.029784  200913 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1018 18:21:41.029909  200913 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 18:21:41.095549  200913 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-213943 --name embed-certs-213943 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-213943 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-213943 --network embed-certs-213943 --ip 192.168.85.2 --volume embed-certs-213943:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 18:21:41.460287  200913 cli_runner.go:164] Run: docker container inspect embed-certs-213943 --format={{.State.Running}}
	I1018 18:21:41.482286  200913 cli_runner.go:164] Run: docker container inspect embed-certs-213943 --format={{.State.Status}}
	I1018 18:21:41.515197  200913 cli_runner.go:164] Run: docker exec embed-certs-213943 stat /var/lib/dpkg/alternatives/iptables
	I1018 18:21:41.585546  200913 oci.go:144] the created container "embed-certs-213943" has a running status.
	I1018 18:21:41.585589  200913 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/embed-certs-213943/id_rsa...
	I1018 18:21:42.406389  200913 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21409-2509/.minikube/machines/embed-certs-213943/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 18:21:42.438787  200913 cli_runner.go:164] Run: docker container inspect embed-certs-213943 --format={{.State.Status}}
	I1018 18:21:42.462329  200913 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 18:21:42.462354  200913 kic_runner.go:114] Args: [docker exec --privileged embed-certs-213943 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 18:21:42.509967  200913 cli_runner.go:164] Run: docker container inspect embed-certs-213943 --format={{.State.Status}}
	I1018 18:21:42.528111  200913 machine.go:93] provisionDockerMachine start ...
	I1018 18:21:42.528209  200913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213943
	I1018 18:21:42.545098  200913 main.go:141] libmachine: Using SSH client type: native
	I1018 18:21:42.545451  200913 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1018 18:21:42.545468  200913 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 18:21:42.546108  200913 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
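
The container was started with --publish=127.0.0.1::22 (and similar flags for 2376, 5000, 8443 and 32443), so Docker assigns an ephemeral loopback port per exposed service; the inspect template above resolves 22/tcp to 127.0.0.1:33058, which is what the SSH client dials. A hand-run equivalent, assuming the container is still up:

    docker port embed-certs-213943 22/tcp
    # expected output, matching the log: 127.0.0.1:33058

The "handshake failed: EOF" on the first dial simply means sshd inside the freshly started container is not accepting connections yet; the provisioner retries and the same SSH step succeeds at 18:21:45 below.
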
	I1018 18:21:44.342018  197575 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 18:21:44.342042  197575 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 18:21:44.342135  197575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-192562
	I1018 18:21:44.365183  197575 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 18:21:44.365206  197575 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 18:21:44.365277  197575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-192562
	I1018 18:21:44.394479  197575 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/default-k8s-diff-port-192562/id_rsa Username:docker}
	I1018 18:21:44.405040  197575 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/default-k8s-diff-port-192562/id_rsa Username:docker}
	I1018 18:21:44.622632  197575 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1018 18:21:44.622790  197575 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 18:21:44.644209  197575 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 18:21:44.752567  197575 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 18:21:45.250408  197575 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-192562" to be "Ready" ...
	I1018 18:21:45.251902  197575 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1018 18:21:45.561430  197575 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1018 18:21:45.564289  197575 addons.go:514] duration metric: took 1.275405161s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1018 18:21:45.761151  197575 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-192562" context rescaled to 1 replicas
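
The sed pipeline a few lines up rewrites the coredns ConfigMap so the Corefile picks up a hosts stanza for host.minikube.internal (plus a log directive); the injected block amounts to:

    hosts {
       192.168.76.1 host.minikube.internal
       fallthrough
    }

192.168.76.1 is the gateway of this profile's Docker network, which is why pods can reach the host machine through that name.
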
	I1018 18:21:45.700461  200913 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-213943
	
	I1018 18:21:45.700486  200913 ubuntu.go:182] provisioning hostname "embed-certs-213943"
	I1018 18:21:45.700547  200913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213943
	I1018 18:21:45.725816  200913 main.go:141] libmachine: Using SSH client type: native
	I1018 18:21:45.726132  200913 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1018 18:21:45.726145  200913 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-213943 && echo "embed-certs-213943" | sudo tee /etc/hostname
	I1018 18:21:45.903104  200913 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-213943
	
	I1018 18:21:45.903202  200913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213943
	I1018 18:21:45.920794  200913 main.go:141] libmachine: Using SSH client type: native
	I1018 18:21:45.921127  200913 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1018 18:21:45.921149  200913 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-213943' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-213943/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-213943' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 18:21:46.070337  200913 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 18:21:46.070407  200913 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-2509/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-2509/.minikube}
	I1018 18:21:46.070440  200913 ubuntu.go:190] setting up certificates
	I1018 18:21:46.070480  200913 provision.go:84] configureAuth start
	I1018 18:21:46.070564  200913 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-213943
	I1018 18:21:46.087782  200913 provision.go:143] copyHostCerts
	I1018 18:21:46.087852  200913 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem, removing ...
	I1018 18:21:46.087862  200913 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem
	I1018 18:21:46.087949  200913 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem (1078 bytes)
	I1018 18:21:46.088051  200913 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem, removing ...
	I1018 18:21:46.088057  200913 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem
	I1018 18:21:46.088091  200913 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem (1123 bytes)
	I1018 18:21:46.088151  200913 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem, removing ...
	I1018 18:21:46.088156  200913 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem
	I1018 18:21:46.088180  200913 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem (1675 bytes)
	I1018 18:21:46.088241  200913 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem org=jenkins.embed-certs-213943 san=[127.0.0.1 192.168.85.2 embed-certs-213943 localhost minikube]
	I1018 18:21:46.700638  200913 provision.go:177] copyRemoteCerts
	I1018 18:21:46.700705  200913 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 18:21:46.700749  200913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213943
	I1018 18:21:46.717592  200913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/embed-certs-213943/id_rsa Username:docker}
	I1018 18:21:46.821686  200913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 18:21:46.841748  200913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1018 18:21:46.860411  200913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1018 18:21:46.879573  200913 provision.go:87] duration metric: took 809.060284ms to configureAuth
	I1018 18:21:46.879598  200913 ubuntu.go:206] setting minikube options for container-runtime
	I1018 18:21:46.879792  200913 config.go:182] Loaded profile config "embed-certs-213943": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 18:21:46.879909  200913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213943
	I1018 18:21:46.897635  200913 main.go:141] libmachine: Using SSH client type: native
	I1018 18:21:46.897960  200913 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1018 18:21:46.897981  200913 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 18:21:47.169766  200913 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 18:21:47.169791  200913 machine.go:96] duration metric: took 4.641654481s to provisionDockerMachine
	I1018 18:21:47.169801  200913 client.go:171] duration metric: took 11.92108129s to LocalClient.Create
	I1018 18:21:47.169838  200913 start.go:167] duration metric: took 11.921148064s to libmachine.API.Create "embed-certs-213943"
	I1018 18:21:47.169853  200913 start.go:293] postStartSetup for "embed-certs-213943" (driver="docker")
	I1018 18:21:47.169863  200913 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 18:21:47.169958  200913 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 18:21:47.170033  200913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213943
	I1018 18:21:47.188577  200913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/embed-certs-213943/id_rsa Username:docker}
	I1018 18:21:47.297315  200913 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 18:21:47.300869  200913 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 18:21:47.300898  200913 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 18:21:47.300910  200913 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/addons for local assets ...
	I1018 18:21:47.300990  200913 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/files for local assets ...
	I1018 18:21:47.301088  200913 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> 43202.pem in /etc/ssl/certs
	I1018 18:21:47.301199  200913 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 18:21:47.308745  200913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /etc/ssl/certs/43202.pem (1708 bytes)
	I1018 18:21:47.328226  200913 start.go:296] duration metric: took 158.358128ms for postStartSetup
	I1018 18:21:47.328593  200913 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-213943
	I1018 18:21:47.345875  200913 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/embed-certs-213943/config.json ...
	I1018 18:21:47.346190  200913 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 18:21:47.346235  200913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213943
	I1018 18:21:47.365877  200913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/embed-certs-213943/id_rsa Username:docker}
	I1018 18:21:47.466029  200913 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 18:21:47.470923  200913 start.go:128] duration metric: took 12.225873956s to createHost
	I1018 18:21:47.470948  200913 start.go:83] releasing machines lock for "embed-certs-213943", held for 12.226018047s
	I1018 18:21:47.471015  200913 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-213943
	I1018 18:21:47.491057  200913 ssh_runner.go:195] Run: cat /version.json
	I1018 18:21:47.491139  200913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213943
	I1018 18:21:47.491163  200913 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 18:21:47.491217  200913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213943
	I1018 18:21:47.518977  200913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/embed-certs-213943/id_rsa Username:docker}
	I1018 18:21:47.529062  200913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/embed-certs-213943/id_rsa Username:docker}
	I1018 18:21:47.624685  200913 ssh_runner.go:195] Run: systemctl --version
	I1018 18:21:47.718325  200913 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 18:21:47.759605  200913 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 18:21:47.764456  200913 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 18:21:47.764525  200913 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 18:21:47.797145  200913 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1018 18:21:47.797178  200913 start.go:495] detecting cgroup driver to use...
	I1018 18:21:47.797219  200913 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 18:21:47.797278  200913 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 18:21:47.814823  200913 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 18:21:47.828729  200913 docker.go:218] disabling cri-docker service (if available) ...
	I1018 18:21:47.828794  200913 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 18:21:47.847231  200913 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 18:21:47.866004  200913 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 18:21:47.989738  200913 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 18:21:48.137801  200913 docker.go:234] disabling docker service ...
	I1018 18:21:48.137907  200913 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 18:21:48.163007  200913 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 18:21:48.177889  200913 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 18:21:48.306880  200913 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 18:21:48.436411  200913 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 18:21:48.451586  200913 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 18:21:48.469225  200913 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 18:21:48.469336  200913 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:21:48.478697  200913 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 18:21:48.478767  200913 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:21:48.491854  200913 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:21:48.502120  200913 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:21:48.513017  200913 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 18:21:48.522569  200913 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:21:48.532183  200913 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:21:48.547131  200913 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:21:48.556285  200913 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 18:21:48.564613  200913 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 18:21:48.572450  200913 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 18:21:48.693104  200913 ssh_runner.go:195] Run: sudo systemctl restart crio
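
Taken together, the sed edits above leave the CRI-O drop-in (/etc/crio/crio.conf.d/02-crio.conf) with settings roughly equivalent to the following sketch before the restart (the exact section layout in the kicbase image may differ):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

i.e. the pause image is pinned to the preloaded registry.k8s.io/pause:3.10.1, the cgroup manager matches the "cgroupfs" driver detected on the host, and unprivileged processes may bind low ports inside pods.
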
	I1018 18:21:48.831708  200913 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 18:21:48.831830  200913 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 18:21:48.835868  200913 start.go:563] Will wait 60s for crictl version
	I1018 18:21:48.835999  200913 ssh_runner.go:195] Run: which crictl
	I1018 18:21:48.839555  200913 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 18:21:48.865630  200913 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 18:21:48.865780  200913 ssh_runner.go:195] Run: crio --version
	I1018 18:21:48.896047  200913 ssh_runner.go:195] Run: crio --version
	I1018 18:21:48.927723  200913 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 18:21:48.930511  200913 cli_runner.go:164] Run: docker network inspect embed-certs-213943 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 18:21:48.946221  200913 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1018 18:21:48.950422  200913 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 18:21:48.965296  200913 kubeadm.go:883] updating cluster {Name:embed-certs-213943 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-213943 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 18:21:48.965435  200913 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 18:21:48.965497  200913 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 18:21:49.000547  200913 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 18:21:49.000577  200913 crio.go:433] Images already preloaded, skipping extraction
	I1018 18:21:49.000672  200913 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 18:21:49.030853  200913 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 18:21:49.030878  200913 cache_images.go:85] Images are preloaded, skipping loading
	I1018 18:21:49.030886  200913 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1018 18:21:49.030984  200913 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-213943 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-213943 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 18:21:49.031071  200913 ssh_runner.go:195] Run: crio config
	I1018 18:21:49.097604  200913 cni.go:84] Creating CNI manager for ""
	I1018 18:21:49.097625  200913 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 18:21:49.097644  200913 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 18:21:49.097691  200913 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-213943 NodeName:embed-certs-213943 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 18:21:49.097853  200913 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-213943"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 18:21:49.097925  200913 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 18:21:49.105702  200913 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 18:21:49.105778  200913 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 18:21:49.113556  200913 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1018 18:21:49.126615  200913 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 18:21:49.140361  200913 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
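	Note: the kubeadm.go:196 block above is the full kubeadm config minikube renders for this profile; it is staged as /var/tmp/minikube/kubeadm.yaml.new here and promoted to /var/tmp/minikube/kubeadm.yaml just before kubeadm init runs (see the sudo cp line further down). A minimal sketch for inspecting the rendered file on the node, assuming the embed-certs-213943 profile is still running:
	    # Print the staged kubeadm config inside the minikube node (sketch, not part of the test run)
	    out/minikube-linux-arm64 -p embed-certs-213943 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml.new
	    # The kubelet drop-in written alongside it
	    out/minikube-linux-arm64 -p embed-certs-213943 ssh -- sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf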
	I1018 18:21:49.154314  200913 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1018 18:21:49.158309  200913 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 18:21:49.168515  200913 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 18:21:49.287614  200913 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 18:21:49.306653  200913 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/embed-certs-213943 for IP: 192.168.85.2
	I1018 18:21:49.306687  200913 certs.go:195] generating shared ca certs ...
	I1018 18:21:49.306703  200913 certs.go:227] acquiring lock for ca certs: {Name:mk544ed642fa2832e9f6dd22fa45f3270b7c1ad4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:21:49.306857  200913 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key
	I1018 18:21:49.306922  200913 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key
	I1018 18:21:49.306934  200913 certs.go:257] generating profile certs ...
	I1018 18:21:49.307002  200913 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/embed-certs-213943/client.key
	I1018 18:21:49.307019  200913 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/embed-certs-213943/client.crt with IP's: []
	I1018 18:21:49.769916  200913 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/embed-certs-213943/client.crt ...
	I1018 18:21:49.769951  200913 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/embed-certs-213943/client.crt: {Name:mk8d5c9baa5e82e2e1ba9f68920e9dd47cdb37ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:21:49.770155  200913 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/embed-certs-213943/client.key ...
	I1018 18:21:49.770168  200913 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/embed-certs-213943/client.key: {Name:mk5ee5f72d8a0c4846f7db58ac10582590f1f581 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:21:49.770264  200913 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/embed-certs-213943/apiserver.key.b72dfec4
	I1018 18:21:49.770284  200913 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/embed-certs-213943/apiserver.crt.b72dfec4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1018 18:21:49.957095  200913 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/embed-certs-213943/apiserver.crt.b72dfec4 ...
	I1018 18:21:49.957125  200913 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/embed-certs-213943/apiserver.crt.b72dfec4: {Name:mke0e48b028b90f0410e48e1e966a27604faa989 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:21:49.957313  200913 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/embed-certs-213943/apiserver.key.b72dfec4 ...
	I1018 18:21:49.957329  200913 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/embed-certs-213943/apiserver.key.b72dfec4: {Name:mk39741976aca409e95f74223015a5ce0302314e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:21:49.957414  200913 certs.go:382] copying /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/embed-certs-213943/apiserver.crt.b72dfec4 -> /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/embed-certs-213943/apiserver.crt
	I1018 18:21:49.957507  200913 certs.go:386] copying /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/embed-certs-213943/apiserver.key.b72dfec4 -> /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/embed-certs-213943/apiserver.key
	I1018 18:21:49.957577  200913 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/embed-certs-213943/proxy-client.key
	I1018 18:21:49.957594  200913 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/embed-certs-213943/proxy-client.crt with IP's: []
	I1018 18:21:50.123830  200913 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/embed-certs-213943/proxy-client.crt ...
	I1018 18:21:50.123860  200913 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/embed-certs-213943/proxy-client.crt: {Name:mka51cc41c7c99be26519f5a3fed6d8555025894 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:21:50.124044  200913 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/embed-certs-213943/proxy-client.key ...
	I1018 18:21:50.124061  200913 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/embed-certs-213943/proxy-client.key: {Name:mk136ad1a8377fcfa7539611d02b7346e3d9693d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:21:50.124297  200913 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem (1338 bytes)
	W1018 18:21:50.124341  200913 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320_empty.pem, impossibly tiny 0 bytes
	I1018 18:21:50.124356  200913 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 18:21:50.124414  200913 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem (1078 bytes)
	I1018 18:21:50.124456  200913 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem (1123 bytes)
	I1018 18:21:50.124484  200913 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem (1675 bytes)
	I1018 18:21:50.124532  200913 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem (1708 bytes)
	I1018 18:21:50.125139  200913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 18:21:50.144917  200913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 18:21:50.165467  200913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 18:21:50.185549  200913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 18:21:50.202944  200913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/embed-certs-213943/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1018 18:21:50.221528  200913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/embed-certs-213943/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 18:21:50.240785  200913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/embed-certs-213943/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 18:21:50.260634  200913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/embed-certs-213943/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 18:21:50.277964  200913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem --> /usr/share/ca-certificates/4320.pem (1338 bytes)
	I1018 18:21:50.295819  200913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /usr/share/ca-certificates/43202.pem (1708 bytes)
	I1018 18:21:50.313085  200913 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 18:21:50.332285  200913 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 18:21:50.345475  200913 ssh_runner.go:195] Run: openssl version
	I1018 18:21:50.351892  200913 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 18:21:50.361771  200913 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 18:21:50.365706  200913 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I1018 18:21:50.365818  200913 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 18:21:50.407085  200913 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 18:21:50.415866  200913 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4320.pem && ln -fs /usr/share/ca-certificates/4320.pem /etc/ssl/certs/4320.pem"
	I1018 18:21:50.424523  200913 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4320.pem
	I1018 18:21:50.428377  200913 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 17:19 /usr/share/ca-certificates/4320.pem
	I1018 18:21:50.428441  200913 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4320.pem
	I1018 18:21:50.469863  200913 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4320.pem /etc/ssl/certs/51391683.0"
	I1018 18:21:50.478017  200913 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43202.pem && ln -fs /usr/share/ca-certificates/43202.pem /etc/ssl/certs/43202.pem"
	I1018 18:21:50.486316  200913 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43202.pem
	I1018 18:21:50.490035  200913 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 17:19 /usr/share/ca-certificates/43202.pem
	I1018 18:21:50.490093  200913 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43202.pem
	I1018 18:21:50.532114  200913 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43202.pem /etc/ssl/certs/3ec20f2e.0"
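	Note: the openssl/ln sequence above registers the three CA bundles with the node's system trust store: openssl x509 -hash prints the subject hash, and the /etc/ssl/certs/<hash>.0 symlink (b5213941.0, 51391683.0, 3ec20f2e.0 in this run) points back at the PEM file. A small sketch for checking one mapping by hand inside the node, mirroring the commands in the log:
	    # Subject hash should match the symlink name created above
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	    ls -l /etc/ssl/certs/b5213941.0   # expected to resolve to minikubeCA.pem per the log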
	I1018 18:21:50.541139  200913 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 18:21:50.545049  200913 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 18:21:50.545136  200913 kubeadm.go:400] StartCluster: {Name:embed-certs-213943 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-213943 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 18:21:50.545216  200913 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 18:21:50.545286  200913 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 18:21:50.577998  200913 cri.go:89] found id: ""
	I1018 18:21:50.578070  200913 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 18:21:50.585930  200913 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 18:21:50.593908  200913 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 18:21:50.593992  200913 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 18:21:50.601852  200913 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 18:21:50.601871  200913 kubeadm.go:157] found existing configuration files:
	
	I1018 18:21:50.601944  200913 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 18:21:50.610042  200913 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 18:21:50.610106  200913 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 18:21:50.617652  200913 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 18:21:50.625777  200913 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 18:21:50.625848  200913 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 18:21:50.633512  200913 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 18:21:50.641328  200913 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 18:21:50.641398  200913 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 18:21:50.648981  200913 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 18:21:50.657617  200913 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 18:21:50.657719  200913 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1018 18:21:50.665767  200913 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 18:21:50.715020  200913 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 18:21:50.715086  200913 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 18:21:50.752302  200913 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 18:21:50.752381  200913 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1018 18:21:50.752427  200913 kubeadm.go:318] OS: Linux
	I1018 18:21:50.752510  200913 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 18:21:50.752564  200913 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1018 18:21:50.752617  200913 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 18:21:50.752679  200913 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 18:21:50.752732  200913 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 18:21:50.752788  200913 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 18:21:50.752838  200913 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 18:21:50.752892  200913 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 18:21:50.752958  200913 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1018 18:21:50.843835  200913 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 18:21:50.844005  200913 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 18:21:50.844128  200913 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 18:21:50.852294  200913 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1018 18:21:47.253460  197575 node_ready.go:57] node "default-k8s-diff-port-192562" has "Ready":"False" status (will retry)
	W1018 18:21:49.255483  197575 node_ready.go:57] node "default-k8s-diff-port-192562" has "Ready":"False" status (will retry)
	W1018 18:21:51.753730  197575 node_ready.go:57] node "default-k8s-diff-port-192562" has "Ready":"False" status (will retry)
	I1018 18:21:50.856707  200913 out.go:252]   - Generating certificates and keys ...
	I1018 18:21:50.856884  200913 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 18:21:50.856999  200913 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 18:21:51.077586  200913 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 18:21:51.600840  200913 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 18:21:51.913073  200913 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 18:21:52.275791  200913 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 18:21:53.426745  200913 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 18:21:53.426898  200913 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [embed-certs-213943 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1018 18:21:54.071391  200913 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 18:21:54.071544  200913 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-213943 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1018 18:21:54.458120  200913 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	W1018 18:21:53.754880  197575 node_ready.go:57] node "default-k8s-diff-port-192562" has "Ready":"False" status (will retry)
	W1018 18:21:56.253843  197575 node_ready.go:57] node "default-k8s-diff-port-192562" has "Ready":"False" status (will retry)
	I1018 18:21:55.708923  200913 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 18:21:57.221702  200913 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 18:21:57.221949  200913 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 18:21:57.683086  200913 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 18:21:57.917594  200913 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 18:21:58.440825  200913 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 18:21:58.873793  200913 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 18:21:59.998418  200913 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 18:21:59.999684  200913 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 18:22:00.004280  200913 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1018 18:21:58.254936  197575 node_ready.go:57] node "default-k8s-diff-port-192562" has "Ready":"False" status (will retry)
	W1018 18:22:00.265908  197575 node_ready.go:57] node "default-k8s-diff-port-192562" has "Ready":"False" status (will retry)
	I1018 18:22:00.013251  200913 out.go:252]   - Booting up control plane ...
	I1018 18:22:00.029445  200913 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 18:22:00.029653  200913 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 18:22:00.029781  200913 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 18:22:00.118374  200913 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 18:22:00.118493  200913 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 18:22:00.152455  200913 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 18:22:00.158054  200913 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 18:22:00.158137  200913 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 18:22:00.406789  200913 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 18:22:00.406919  200913 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 18:22:02.409341  200913 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 2.000932476s
	I1018 18:22:02.409471  200913 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 18:22:02.409568  200913 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1018 18:22:02.409697  200913 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 18:22:02.409790  200913 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1018 18:22:02.754207  197575 node_ready.go:57] node "default-k8s-diff-port-192562" has "Ready":"False" status (will retry)
	W1018 18:22:05.254379  197575 node_ready.go:57] node "default-k8s-diff-port-192562" has "Ready":"False" status (will retry)
	I1018 18:22:05.421785  200913 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.012161148s
	I1018 18:22:06.756497  200913 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.347348263s
	I1018 18:22:08.410829  200913 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.001642969s
	I1018 18:22:08.432834  200913 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 18:22:08.448562  200913 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 18:22:08.464697  200913 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 18:22:08.465014  200913 kubeadm.go:318] [mark-control-plane] Marking the node embed-certs-213943 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 18:22:08.487422  200913 kubeadm.go:318] [bootstrap-token] Using token: nuevvl.n8vde9hzni8yeejn
	I1018 18:22:08.490337  200913 out.go:252]   - Configuring RBAC rules ...
	I1018 18:22:08.490503  200913 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 18:22:08.495546  200913 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 18:22:08.509623  200913 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 18:22:08.514564  200913 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 18:22:08.519122  200913 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 18:22:08.523248  200913 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 18:22:08.818432  200913 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 18:22:09.268714  200913 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 18:22:09.818109  200913 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 18:22:09.819567  200913 kubeadm.go:318] 
	I1018 18:22:09.819649  200913 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 18:22:09.819655  200913 kubeadm.go:318] 
	I1018 18:22:09.819789  200913 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 18:22:09.819798  200913 kubeadm.go:318] 
	I1018 18:22:09.819823  200913 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 18:22:09.819884  200913 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 18:22:09.819949  200913 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 18:22:09.819958  200913 kubeadm.go:318] 
	I1018 18:22:09.820011  200913 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 18:22:09.820016  200913 kubeadm.go:318] 
	I1018 18:22:09.820063  200913 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 18:22:09.820067  200913 kubeadm.go:318] 
	I1018 18:22:09.820120  200913 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 18:22:09.820194  200913 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 18:22:09.820294  200913 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 18:22:09.820309  200913 kubeadm.go:318] 
	I1018 18:22:09.820399  200913 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 18:22:09.820494  200913 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 18:22:09.820506  200913 kubeadm.go:318] 
	I1018 18:22:09.820594  200913 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token nuevvl.n8vde9hzni8yeejn \
	I1018 18:22:09.820715  200913 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:d0244c5bf86cdf97546c6a22045cb6ed9d7ead524d9c98d9ca35da77d5d7a04d \
	I1018 18:22:09.820743  200913 kubeadm.go:318] 	--control-plane 
	I1018 18:22:09.820748  200913 kubeadm.go:318] 
	I1018 18:22:09.820846  200913 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 18:22:09.820856  200913 kubeadm.go:318] 
	I1018 18:22:09.821098  200913 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token nuevvl.n8vde9hzni8yeejn \
	I1018 18:22:09.821223  200913 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:d0244c5bf86cdf97546c6a22045cb6ed9d7ead524d9c98d9ca35da77d5d7a04d 
	I1018 18:22:09.825468  200913 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1018 18:22:09.825704  200913 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1018 18:22:09.825820  200913 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
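	Note: the three [WARNING ...] preflight lines above are non-fatal; the Service-Kubelet one can be cleared exactly as the message suggests. A sketch, run inside the node (the enable command is quoted from the warning itself):
	    sudo systemctl enable kubelet.service
	    systemctl is-enabled kubelet   # should then report "enabled"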
	I1018 18:22:09.825837  200913 cni.go:84] Creating CNI manager for ""
	I1018 18:22:09.825845  200913 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 18:22:09.829119  200913 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1018 18:22:09.832019  200913 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 18:22:09.836312  200913 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1018 18:22:09.836334  200913 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 18:22:09.850376  200913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
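	Note: the CNI step applies the kindnet manifest staged at /var/tmp/minikube/cni.yaml using the node's own kubectl binary. A hedged follow-up check with the same kubectl invocation as the log (the kindnet pod naming is inferred from the kindnet-6vrvc pod visible later in this report, not from this step):
	    sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	      get pods -n kube-system -o wide   # look for a Running kindnet-* pod on embed-certs-213943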
	W1018 18:22:07.753360  197575 node_ready.go:57] node "default-k8s-diff-port-192562" has "Ready":"False" status (will retry)
	W1018 18:22:09.754020  197575 node_ready.go:57] node "default-k8s-diff-port-192562" has "Ready":"False" status (will retry)
	I1018 18:22:10.166298  200913 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 18:22:10.166448  200913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:22:10.166526  200913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-213943 minikube.k8s.io/updated_at=2025_10_18T18_22_10_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404 minikube.k8s.io/name=embed-certs-213943 minikube.k8s.io/primary=true
	I1018 18:22:10.362047  200913 ops.go:34] apiserver oom_adj: -16
	I1018 18:22:10.362160  200913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:22:10.862736  200913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:22:11.362982  200913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:22:11.863059  200913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:22:12.363078  200913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:22:12.862271  200913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:22:13.362983  200913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:22:13.862274  200913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:22:14.362981  200913 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:22:14.462627  200913 kubeadm.go:1113] duration metric: took 4.296227613s to wait for elevateKubeSystemPrivileges
	I1018 18:22:14.462652  200913 kubeadm.go:402] duration metric: took 23.917519713s to StartCluster
	I1018 18:22:14.462667  200913 settings.go:142] acquiring lock: {Name:mk3a3fd093bc95e20cc1842611fedcbe4a79e692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:22:14.462725  200913 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-2509/kubeconfig
	I1018 18:22:14.464057  200913 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/kubeconfig: {Name:mk229d7d0b6575771899e0ce8a346bbddf5cf86b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:22:14.464256  200913 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 18:22:14.464388  200913 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 18:22:14.464662  200913 config.go:182] Loaded profile config "embed-certs-213943": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 18:22:14.464698  200913 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 18:22:14.464755  200913 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-213943"
	I1018 18:22:14.465118  200913 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-213943"
	I1018 18:22:14.465143  200913 host.go:66] Checking if "embed-certs-213943" exists ...
	I1018 18:22:14.465381  200913 addons.go:69] Setting default-storageclass=true in profile "embed-certs-213943"
	I1018 18:22:14.465400  200913 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-213943"
	I1018 18:22:14.465741  200913 cli_runner.go:164] Run: docker container inspect embed-certs-213943 --format={{.State.Status}}
	I1018 18:22:14.466751  200913 cli_runner.go:164] Run: docker container inspect embed-certs-213943 --format={{.State.Status}}
	I1018 18:22:14.467753  200913 out.go:179] * Verifying Kubernetes components...
	I1018 18:22:14.472909  200913 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 18:22:14.508371  200913 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 18:22:14.511766  200913 addons.go:238] Setting addon default-storageclass=true in "embed-certs-213943"
	I1018 18:22:14.511806  200913 host.go:66] Checking if "embed-certs-213943" exists ...
	I1018 18:22:14.512222  200913 cli_runner.go:164] Run: docker container inspect embed-certs-213943 --format={{.State.Status}}
	I1018 18:22:14.512391  200913 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 18:22:14.512401  200913 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 18:22:14.512436  200913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213943
	I1018 18:22:14.546228  200913 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 18:22:14.546249  200913 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 18:22:14.546327  200913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213943
	I1018 18:22:14.551155  200913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/embed-certs-213943/id_rsa Username:docker}
	I1018 18:22:14.579592  200913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/embed-certs-213943/id_rsa Username:docker}
	I1018 18:22:14.845499  200913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 18:22:14.870421  200913 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1018 18:22:14.870595  200913 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 18:22:14.929537  200913 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 18:22:15.718109  200913 node_ready.go:35] waiting up to 6m0s for node "embed-certs-213943" to be "Ready" ...
	I1018 18:22:15.718219  200913 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1018 18:22:15.765148  200913 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1018 18:22:12.253601  197575 node_ready.go:57] node "default-k8s-diff-port-192562" has "Ready":"False" status (will retry)
	W1018 18:22:14.253649  197575 node_ready.go:57] node "default-k8s-diff-port-192562" has "Ready":"False" status (will retry)
	W1018 18:22:16.253789  197575 node_ready.go:57] node "default-k8s-diff-port-192562" has "Ready":"False" status (will retry)
	I1018 18:22:15.768012  200913 addons.go:514] duration metric: took 1.303297119s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1018 18:22:16.222896  200913 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-213943" context rescaled to 1 replicas
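	Note: the "rescaled to 1 replicas" line is minikube trimming the default CoreDNS deployment to a single replica on this one-node cluster. A rough plain-kubectl equivalent (sketch; the deployment name coredns is as shown in the log):
	    kubectl -n kube-system scale deployment coredns --replicas=1
	    kubectl -n kube-system get deployment coredns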
	W1018 18:22:17.722050  200913 node_ready.go:57] node "embed-certs-213943" has "Ready":"False" status (will retry)
	W1018 18:22:19.722752  200913 node_ready.go:57] node "embed-certs-213943" has "Ready":"False" status (will retry)
	W1018 18:22:18.254170  197575 node_ready.go:57] node "default-k8s-diff-port-192562" has "Ready":"False" status (will retry)
	W1018 18:22:20.753662  197575 node_ready.go:57] node "default-k8s-diff-port-192562" has "Ready":"False" status (will retry)
	W1018 18:22:22.222356  200913 node_ready.go:57] node "embed-certs-213943" has "Ready":"False" status (will retry)
	W1018 18:22:24.222795  200913 node_ready.go:57] node "embed-certs-213943" has "Ready":"False" status (will retry)
	W1018 18:22:22.753937  197575 node_ready.go:57] node "default-k8s-diff-port-192562" has "Ready":"False" status (will retry)
	W1018 18:22:25.254457  197575 node_ready.go:57] node "default-k8s-diff-port-192562" has "Ready":"False" status (will retry)
	I1018 18:22:25.753633  197575 node_ready.go:49] node "default-k8s-diff-port-192562" is "Ready"
	I1018 18:22:25.753661  197575 node_ready.go:38] duration metric: took 40.503163712s for node "default-k8s-diff-port-192562" to be "Ready" ...
	I1018 18:22:25.753674  197575 api_server.go:52] waiting for apiserver process to appear ...
	I1018 18:22:25.753739  197575 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 18:22:25.765478  197575 api_server.go:72] duration metric: took 41.476998312s to wait for apiserver process to appear ...
	I1018 18:22:25.765505  197575 api_server.go:88] waiting for apiserver healthz status ...
	I1018 18:22:25.765532  197575 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1018 18:22:25.773723  197575 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1018 18:22:25.774746  197575 api_server.go:141] control plane version: v1.34.1
	I1018 18:22:25.774774  197575 api_server.go:131] duration metric: took 9.260919ms to wait for apiserver health ...
	I1018 18:22:25.774784  197575 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 18:22:25.780227  197575 system_pods.go:59] 8 kube-system pods found
	I1018 18:22:25.780279  197575 system_pods.go:61] "coredns-66bc5c9577-psj29" [31c59339-b043-4ff8-858f-bd618113dfa3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 18:22:25.780287  197575 system_pods.go:61] "etcd-default-k8s-diff-port-192562" [293bcf9f-af57-493e-b216-e56b6a7b89ed] Running
	I1018 18:22:25.780294  197575 system_pods.go:61] "kindnet-6vrvc" [960d4013-a055-439a-8a17-791353ced8cb] Running
	I1018 18:22:25.780299  197575 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-192562" [91aff442-efe3-4eb6-ae1a-f0f26d96166b] Running
	I1018 18:22:25.780308  197575 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-192562" [19f2767f-1e6c-4ddf-9737-2a6a11ab87af] Running
	I1018 18:22:25.780312  197575 system_pods.go:61] "kube-proxy-c7jft" [56012359-4b0e-4ac5-b6c5-816c0b0c7063] Running
	I1018 18:22:25.780319  197575 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-192562" [4a166553-f0f4-4e61-b4d4-6c8ce69e82d4] Running
	I1018 18:22:25.780326  197575 system_pods.go:61] "storage-provisioner" [ee029b71-681a-4716-ad5f-c699bd315801] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 18:22:25.780332  197575 system_pods.go:74] duration metric: took 5.542643ms to wait for pod list to return data ...
	I1018 18:22:25.780345  197575 default_sa.go:34] waiting for default service account to be created ...
	I1018 18:22:25.790297  197575 default_sa.go:45] found service account: "default"
	I1018 18:22:25.790324  197575 default_sa.go:55] duration metric: took 9.96853ms for default service account to be created ...
	I1018 18:22:25.790335  197575 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 18:22:25.801482  197575 system_pods.go:86] 8 kube-system pods found
	I1018 18:22:25.801523  197575 system_pods.go:89] "coredns-66bc5c9577-psj29" [31c59339-b043-4ff8-858f-bd618113dfa3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 18:22:25.801531  197575 system_pods.go:89] "etcd-default-k8s-diff-port-192562" [293bcf9f-af57-493e-b216-e56b6a7b89ed] Running
	I1018 18:22:25.801537  197575 system_pods.go:89] "kindnet-6vrvc" [960d4013-a055-439a-8a17-791353ced8cb] Running
	I1018 18:22:25.801543  197575 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-192562" [91aff442-efe3-4eb6-ae1a-f0f26d96166b] Running
	I1018 18:22:25.801548  197575 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-192562" [19f2767f-1e6c-4ddf-9737-2a6a11ab87af] Running
	I1018 18:22:25.801552  197575 system_pods.go:89] "kube-proxy-c7jft" [56012359-4b0e-4ac5-b6c5-816c0b0c7063] Running
	I1018 18:22:25.801557  197575 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-192562" [4a166553-f0f4-4e61-b4d4-6c8ce69e82d4] Running
	I1018 18:22:25.801563  197575 system_pods.go:89] "storage-provisioner" [ee029b71-681a-4716-ad5f-c699bd315801] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 18:22:25.801589  197575 retry.go:31] will retry after 308.174363ms: missing components: kube-dns
	I1018 18:22:26.114693  197575 system_pods.go:86] 8 kube-system pods found
	I1018 18:22:26.114726  197575 system_pods.go:89] "coredns-66bc5c9577-psj29" [31c59339-b043-4ff8-858f-bd618113dfa3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 18:22:26.114734  197575 system_pods.go:89] "etcd-default-k8s-diff-port-192562" [293bcf9f-af57-493e-b216-e56b6a7b89ed] Running
	I1018 18:22:26.114742  197575 system_pods.go:89] "kindnet-6vrvc" [960d4013-a055-439a-8a17-791353ced8cb] Running
	I1018 18:22:26.114747  197575 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-192562" [91aff442-efe3-4eb6-ae1a-f0f26d96166b] Running
	I1018 18:22:26.114752  197575 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-192562" [19f2767f-1e6c-4ddf-9737-2a6a11ab87af] Running
	I1018 18:22:26.114756  197575 system_pods.go:89] "kube-proxy-c7jft" [56012359-4b0e-4ac5-b6c5-816c0b0c7063] Running
	I1018 18:22:26.114765  197575 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-192562" [4a166553-f0f4-4e61-b4d4-6c8ce69e82d4] Running
	I1018 18:22:26.114772  197575 system_pods.go:89] "storage-provisioner" [ee029b71-681a-4716-ad5f-c699bd315801] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 18:22:26.114788  197575 retry.go:31] will retry after 292.433391ms: missing components: kube-dns
	I1018 18:22:26.411565  197575 system_pods.go:86] 8 kube-system pods found
	I1018 18:22:26.411599  197575 system_pods.go:89] "coredns-66bc5c9577-psj29" [31c59339-b043-4ff8-858f-bd618113dfa3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 18:22:26.411606  197575 system_pods.go:89] "etcd-default-k8s-diff-port-192562" [293bcf9f-af57-493e-b216-e56b6a7b89ed] Running
	I1018 18:22:26.411636  197575 system_pods.go:89] "kindnet-6vrvc" [960d4013-a055-439a-8a17-791353ced8cb] Running
	I1018 18:22:26.411646  197575 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-192562" [91aff442-efe3-4eb6-ae1a-f0f26d96166b] Running
	I1018 18:22:26.411651  197575 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-192562" [19f2767f-1e6c-4ddf-9737-2a6a11ab87af] Running
	I1018 18:22:26.411656  197575 system_pods.go:89] "kube-proxy-c7jft" [56012359-4b0e-4ac5-b6c5-816c0b0c7063] Running
	I1018 18:22:26.411661  197575 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-192562" [4a166553-f0f4-4e61-b4d4-6c8ce69e82d4] Running
	I1018 18:22:26.411671  197575 system_pods.go:89] "storage-provisioner" [ee029b71-681a-4716-ad5f-c699bd315801] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 18:22:26.411700  197575 retry.go:31] will retry after 313.463093ms: missing components: kube-dns
	I1018 18:22:26.729335  197575 system_pods.go:86] 8 kube-system pods found
	I1018 18:22:26.729374  197575 system_pods.go:89] "coredns-66bc5c9577-psj29" [31c59339-b043-4ff8-858f-bd618113dfa3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 18:22:26.729382  197575 system_pods.go:89] "etcd-default-k8s-diff-port-192562" [293bcf9f-af57-493e-b216-e56b6a7b89ed] Running
	I1018 18:22:26.729391  197575 system_pods.go:89] "kindnet-6vrvc" [960d4013-a055-439a-8a17-791353ced8cb] Running
	I1018 18:22:26.729396  197575 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-192562" [91aff442-efe3-4eb6-ae1a-f0f26d96166b] Running
	I1018 18:22:26.729400  197575 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-192562" [19f2767f-1e6c-4ddf-9737-2a6a11ab87af] Running
	I1018 18:22:26.729405  197575 system_pods.go:89] "kube-proxy-c7jft" [56012359-4b0e-4ac5-b6c5-816c0b0c7063] Running
	I1018 18:22:26.729410  197575 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-192562" [4a166553-f0f4-4e61-b4d4-6c8ce69e82d4] Running
	I1018 18:22:26.729421  197575 system_pods.go:89] "storage-provisioner" [ee029b71-681a-4716-ad5f-c699bd315801] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 18:22:26.729443  197575 retry.go:31] will retry after 418.104027ms: missing components: kube-dns
	I1018 18:22:27.152167  197575 system_pods.go:86] 8 kube-system pods found
	I1018 18:22:27.152204  197575 system_pods.go:89] "coredns-66bc5c9577-psj29" [31c59339-b043-4ff8-858f-bd618113dfa3] Running
	I1018 18:22:27.152211  197575 system_pods.go:89] "etcd-default-k8s-diff-port-192562" [293bcf9f-af57-493e-b216-e56b6a7b89ed] Running
	I1018 18:22:27.152219  197575 system_pods.go:89] "kindnet-6vrvc" [960d4013-a055-439a-8a17-791353ced8cb] Running
	I1018 18:22:27.152224  197575 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-192562" [91aff442-efe3-4eb6-ae1a-f0f26d96166b] Running
	I1018 18:22:27.152229  197575 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-192562" [19f2767f-1e6c-4ddf-9737-2a6a11ab87af] Running
	I1018 18:22:27.152267  197575 system_pods.go:89] "kube-proxy-c7jft" [56012359-4b0e-4ac5-b6c5-816c0b0c7063] Running
	I1018 18:22:27.152278  197575 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-192562" [4a166553-f0f4-4e61-b4d4-6c8ce69e82d4] Running
	I1018 18:22:27.152284  197575 system_pods.go:89] "storage-provisioner" [ee029b71-681a-4716-ad5f-c699bd315801] Running
	I1018 18:22:27.152292  197575 system_pods.go:126] duration metric: took 1.361951553s to wait for k8s-apps to be running ...
	I1018 18:22:27.152304  197575 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 18:22:27.152372  197575 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 18:22:27.165875  197575 system_svc.go:56] duration metric: took 13.561667ms WaitForService to wait for kubelet
	I1018 18:22:27.165904  197575 kubeadm.go:586] duration metric: took 42.877428126s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 18:22:27.165923  197575 node_conditions.go:102] verifying NodePressure condition ...
	I1018 18:22:27.168866  197575 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 18:22:27.168898  197575 node_conditions.go:123] node cpu capacity is 2
	I1018 18:22:27.168911  197575 node_conditions.go:105] duration metric: took 2.982684ms to run NodePressure ...
	I1018 18:22:27.169029  197575 start.go:241] waiting for startup goroutines ...
	I1018 18:22:27.169048  197575 start.go:246] waiting for cluster config update ...
	I1018 18:22:27.169061  197575 start.go:255] writing updated cluster config ...
	I1018 18:22:27.169366  197575 ssh_runner.go:195] Run: rm -f paused
	I1018 18:22:27.172894  197575 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 18:22:27.177768  197575 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-psj29" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:22:27.182726  197575 pod_ready.go:94] pod "coredns-66bc5c9577-psj29" is "Ready"
	I1018 18:22:27.182755  197575 pod_ready.go:86] duration metric: took 4.960391ms for pod "coredns-66bc5c9577-psj29" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:22:27.185203  197575 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-192562" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:22:27.189779  197575 pod_ready.go:94] pod "etcd-default-k8s-diff-port-192562" is "Ready"
	I1018 18:22:27.189851  197575 pod_ready.go:86] duration metric: took 4.623296ms for pod "etcd-default-k8s-diff-port-192562" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:22:27.192100  197575 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-192562" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:22:27.196203  197575 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-192562" is "Ready"
	I1018 18:22:27.196229  197575 pod_ready.go:86] duration metric: took 4.103412ms for pod "kube-apiserver-default-k8s-diff-port-192562" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:22:27.198746  197575 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-192562" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:22:27.576573  197575 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-192562" is "Ready"
	I1018 18:22:27.576599  197575 pod_ready.go:86] duration metric: took 377.829454ms for pod "kube-controller-manager-default-k8s-diff-port-192562" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:22:27.778823  197575 pod_ready.go:83] waiting for pod "kube-proxy-c7jft" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:22:28.177561  197575 pod_ready.go:94] pod "kube-proxy-c7jft" is "Ready"
	I1018 18:22:28.177594  197575 pod_ready.go:86] duration metric: took 398.744586ms for pod "kube-proxy-c7jft" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:22:28.377867  197575 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-192562" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:22:28.777463  197575 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-192562" is "Ready"
	I1018 18:22:28.777540  197575 pod_ready.go:86] duration metric: took 399.642394ms for pod "kube-scheduler-default-k8s-diff-port-192562" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:22:28.777569  197575 pod_ready.go:40] duration metric: took 1.604643622s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 18:22:28.854003  197575 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 18:22:28.857336  197575 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-192562" cluster and "default" namespace by default
	W1018 18:22:26.722725  200913 node_ready.go:57] node "embed-certs-213943" has "Ready":"False" status (will retry)
	W1018 18:22:29.222495  200913 node_ready.go:57] node "embed-certs-213943" has "Ready":"False" status (will retry)
	W1018 18:22:31.223105  200913 node_ready.go:57] node "embed-certs-213943" has "Ready":"False" status (will retry)
	W1018 18:22:33.722178  200913 node_ready.go:57] node "embed-certs-213943" has "Ready":"False" status (will retry)
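For context on the pod_ready waits logged above: minikube polls each kube-system control-plane pod (by the label selectors listed in the log) until it reports Ready or the 4m0s budget expires. The following is a minimal client-go sketch of that kind of wait, assuming a local kubeconfig; it is illustrative only and not taken from minikube's own pod_ready.go.

    // Poll kube-system pods matching the selectors from the log until all are Ready.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func podReady(p *corev1.Pod) bool {
    	for _, c := range p.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)

    	// Same label selectors minikube waits on, polled for up to 4 minutes.
    	selectors := []string{"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
    		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler"}

    	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			for _, sel := range selectors {
    				pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: sel})
    				if err != nil {
    					return false, nil // treat errors as transient and keep polling
    				}
    				for i := range pods.Items {
    					if !podReady(&pods.Items[i]) {
    						return false, nil
    					}
    				}
    			}
    			return true, nil
    		})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("all kube-system control-plane pods are Ready")
    }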
	
	
	==> CRI-O <==
	Oct 18 18:22:25 default-k8s-diff-port-192562 crio[841]: time="2025-10-18T18:22:25.888322631Z" level=info msg="Created container 69affa7467cbbd42b7cf087cc7a76f62c293522d25851168179c1273758b25e1: kube-system/coredns-66bc5c9577-psj29/coredns" id=290f7b8f-bfd7-40f9-a77d-a2945845a601 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 18:22:25 default-k8s-diff-port-192562 crio[841]: time="2025-10-18T18:22:25.889426538Z" level=info msg="Starting container: 69affa7467cbbd42b7cf087cc7a76f62c293522d25851168179c1273758b25e1" id=7142d7d1-79b6-4703-9d79-2def3c011823 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 18:22:25 default-k8s-diff-port-192562 crio[841]: time="2025-10-18T18:22:25.897392466Z" level=info msg="Started container" PID=1734 containerID=69affa7467cbbd42b7cf087cc7a76f62c293522d25851168179c1273758b25e1 description=kube-system/coredns-66bc5c9577-psj29/coredns id=7142d7d1-79b6-4703-9d79-2def3c011823 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d1e20a2d401881ff166c3cad22816cafc9462d1819a2ba24fd89a79856f847f4
	Oct 18 18:22:29 default-k8s-diff-port-192562 crio[841]: time="2025-10-18T18:22:29.405227805Z" level=info msg="Running pod sandbox: default/busybox/POD" id=e2a1bf32-2ed9-4862-85b5-f092d1ee308e name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 18:22:29 default-k8s-diff-port-192562 crio[841]: time="2025-10-18T18:22:29.405296532Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 18:22:29 default-k8s-diff-port-192562 crio[841]: time="2025-10-18T18:22:29.410263676Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:7a26c9428382fbb27a9bb3a21800ca81926912f9cd7450d545a87ef5697a1c1d UID:3e78f628-7e13-41fa-9490-a4c4f9ae21c7 NetNS:/var/run/netns/42916669-86b0-455e-8e13-d369d4291fa2 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000078e98}] Aliases:map[]}"
	Oct 18 18:22:29 default-k8s-diff-port-192562 crio[841]: time="2025-10-18T18:22:29.410417024Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 18 18:22:29 default-k8s-diff-port-192562 crio[841]: time="2025-10-18T18:22:29.423296639Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:7a26c9428382fbb27a9bb3a21800ca81926912f9cd7450d545a87ef5697a1c1d UID:3e78f628-7e13-41fa-9490-a4c4f9ae21c7 NetNS:/var/run/netns/42916669-86b0-455e-8e13-d369d4291fa2 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000078e98}] Aliases:map[]}"
	Oct 18 18:22:29 default-k8s-diff-port-192562 crio[841]: time="2025-10-18T18:22:29.423442717Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 18 18:22:29 default-k8s-diff-port-192562 crio[841]: time="2025-10-18T18:22:29.426043874Z" level=info msg="Ran pod sandbox 7a26c9428382fbb27a9bb3a21800ca81926912f9cd7450d545a87ef5697a1c1d with infra container: default/busybox/POD" id=e2a1bf32-2ed9-4862-85b5-f092d1ee308e name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 18:22:29 default-k8s-diff-port-192562 crio[841]: time="2025-10-18T18:22:29.429423044Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=767b85ea-30f8-4816-bd12-06d101313b14 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 18:22:29 default-k8s-diff-port-192562 crio[841]: time="2025-10-18T18:22:29.429577425Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=767b85ea-30f8-4816-bd12-06d101313b14 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 18:22:29 default-k8s-diff-port-192562 crio[841]: time="2025-10-18T18:22:29.42964543Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=767b85ea-30f8-4816-bd12-06d101313b14 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 18:22:29 default-k8s-diff-port-192562 crio[841]: time="2025-10-18T18:22:29.432645354Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=99e67c1f-a7ab-4725-b751-3ca002837b6c name=/runtime.v1.ImageService/PullImage
	Oct 18 18:22:29 default-k8s-diff-port-192562 crio[841]: time="2025-10-18T18:22:29.436682Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 18 18:22:31 default-k8s-diff-port-192562 crio[841]: time="2025-10-18T18:22:31.441629328Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=99e67c1f-a7ab-4725-b751-3ca002837b6c name=/runtime.v1.ImageService/PullImage
	Oct 18 18:22:31 default-k8s-diff-port-192562 crio[841]: time="2025-10-18T18:22:31.442367996Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=bc616149-ccbf-4a80-bcf1-afa1db0d35ed name=/runtime.v1.ImageService/ImageStatus
	Oct 18 18:22:31 default-k8s-diff-port-192562 crio[841]: time="2025-10-18T18:22:31.445988678Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=498dfadf-1541-4e81-a8a5-1f04c3a8dd29 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 18:22:31 default-k8s-diff-port-192562 crio[841]: time="2025-10-18T18:22:31.451616171Z" level=info msg="Creating container: default/busybox/busybox" id=3f579003-f2ac-4584-ab2f-77007206b299 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 18:22:31 default-k8s-diff-port-192562 crio[841]: time="2025-10-18T18:22:31.453368579Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 18:22:31 default-k8s-diff-port-192562 crio[841]: time="2025-10-18T18:22:31.458088525Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 18:22:31 default-k8s-diff-port-192562 crio[841]: time="2025-10-18T18:22:31.458928995Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 18:22:31 default-k8s-diff-port-192562 crio[841]: time="2025-10-18T18:22:31.474646907Z" level=info msg="Created container c175251e16a68d382a5f36c4bb8d71552f427b1216b4a6e75091faf2e62dc47f: default/busybox/busybox" id=3f579003-f2ac-4584-ab2f-77007206b299 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 18:22:31 default-k8s-diff-port-192562 crio[841]: time="2025-10-18T18:22:31.475760735Z" level=info msg="Starting container: c175251e16a68d382a5f36c4bb8d71552f427b1216b4a6e75091faf2e62dc47f" id=aad0506e-dcd6-4a90-8c4d-a1d2ad440f3d name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 18:22:31 default-k8s-diff-port-192562 crio[841]: time="2025-10-18T18:22:31.47787469Z" level=info msg="Started container" PID=1790 containerID=c175251e16a68d382a5f36c4bb8d71552f427b1216b4a6e75091faf2e62dc47f description=default/busybox/busybox id=aad0506e-dcd6-4a90-8c4d-a1d2ad440f3d name=/runtime.v1.RuntimeService/StartContainer sandboxID=7a26c9428382fbb27a9bb3a21800ca81926912f9cd7450d545a87ef5697a1c1d
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	c175251e16a68       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago        Running             busybox                   0                   7a26c9428382f       busybox                                                default
	69affa7467cbb       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      13 seconds ago       Running             coredns                   0                   d1e20a2d40188       coredns-66bc5c9577-psj29                               kube-system
	a6b6e1a78f1c3       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      13 seconds ago       Running             storage-provisioner       0                   fa55256170575       storage-provisioner                                    kube-system
	cbf6d95cd739f       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      54 seconds ago       Running             kindnet-cni               0                   f241a4946de57       kindnet-6vrvc                                          kube-system
	21f4069d97199       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      54 seconds ago       Running             kube-proxy                0                   fd6468de15562       kube-proxy-c7jft                                       kube-system
	e91fea021acf0       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   eb3bce3f6d140       kube-controller-manager-default-k8s-diff-port-192562   kube-system
	e34e5845627ab       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   72b23398f9293       kube-apiserver-default-k8s-diff-port-192562            kube-system
	d2fe2e4449cbb       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   e310de1d9436c       etcd-default-k8s-diff-port-192562                      kube-system
	5dc814f6f6169       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   07503f976ab6b       kube-scheduler-default-k8s-diff-port-192562            kube-system
	
	
	==> coredns [69affa7467cbbd42b7cf087cc7a76f62c293522d25851168179c1273758b25e1] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50983 - 26044 "HINFO IN 1217343980777730911.6482738757095384278. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022284946s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-192562
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-192562
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=default-k8s-diff-port-192562
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T18_21_40_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 18:21:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-192562
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 18:22:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 18:22:25 +0000   Sat, 18 Oct 2025 18:21:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 18:22:25 +0000   Sat, 18 Oct 2025 18:21:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 18:22:25 +0000   Sat, 18 Oct 2025 18:21:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 18:22:25 +0000   Sat, 18 Oct 2025 18:22:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-192562
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                c4581513-26ed-464e-afab-6c98e6b6fd18
	  Boot ID:                    8a1d6305-2994-4aa4-a4cc-6b62966b9918
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-psj29                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     55s
	  kube-system                 etcd-default-k8s-diff-port-192562                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         60s
	  kube-system                 kindnet-6vrvc                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      55s
	  kube-system                 kube-apiserver-default-k8s-diff-port-192562             250m (12%)    0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-192562    200m (10%)    0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-proxy-c7jft                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 kube-scheduler-default-k8s-diff-port-192562             100m (5%)     0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 54s                kube-proxy       
	  Normal   Starting                 71s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 71s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  70s (x8 over 70s)  kubelet          Node default-k8s-diff-port-192562 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    70s (x8 over 70s)  kubelet          Node default-k8s-diff-port-192562 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     70s (x8 over 70s)  kubelet          Node default-k8s-diff-port-192562 status is now: NodeHasSufficientPID
	  Normal   Starting                 60s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 60s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  60s                kubelet          Node default-k8s-diff-port-192562 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    60s                kubelet          Node default-k8s-diff-port-192562 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     60s                kubelet          Node default-k8s-diff-port-192562 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           56s                node-controller  Node default-k8s-diff-port-192562 event: Registered Node default-k8s-diff-port-192562 in Controller
	  Normal   NodeReady                14s                kubelet          Node default-k8s-diff-port-192562 status is now: NodeReady
	
	
	==> dmesg <==
	[ +33.320958] overlayfs: idmapped layers are currently not supported
	[Oct18 18:00] overlayfs: idmapped layers are currently not supported
	[Oct18 18:01] overlayfs: idmapped layers are currently not supported
	[Oct18 18:02] overlayfs: idmapped layers are currently not supported
	[Oct18 18:04] overlayfs: idmapped layers are currently not supported
	[ +24.403909] overlayfs: idmapped layers are currently not supported
	[  +6.162774] overlayfs: idmapped layers are currently not supported
	[Oct18 18:05] overlayfs: idmapped layers are currently not supported
	[ +25.128760] overlayfs: idmapped layers are currently not supported
	[Oct18 18:06] overlayfs: idmapped layers are currently not supported
	[Oct18 18:07] overlayfs: idmapped layers are currently not supported
	[Oct18 18:08] overlayfs: idmapped layers are currently not supported
	[Oct18 18:09] overlayfs: idmapped layers are currently not supported
	[Oct18 18:11] overlayfs: idmapped layers are currently not supported
	[Oct18 18:13] overlayfs: idmapped layers are currently not supported
	[ +30.969240] overlayfs: idmapped layers are currently not supported
	[Oct18 18:15] overlayfs: idmapped layers are currently not supported
	[Oct18 18:16] overlayfs: idmapped layers are currently not supported
	[Oct18 18:17] overlayfs: idmapped layers are currently not supported
	[ +23.167826] overlayfs: idmapped layers are currently not supported
	[Oct18 18:18] overlayfs: idmapped layers are currently not supported
	[ +38.509809] overlayfs: idmapped layers are currently not supported
	[Oct18 18:19] overlayfs: idmapped layers are currently not supported
	[Oct18 18:21] overlayfs: idmapped layers are currently not supported
	[Oct18 18:22] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [d2fe2e4449cbb9c92cacc6b234f3355be4946c1addd97f27797577e935eed547] <==
	{"level":"warn","ts":"2025-10-18T18:21:34.233063Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:21:34.245166Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:21:34.266902Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:21:34.281797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:21:34.313561Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:21:34.327461Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:21:34.358401Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:21:34.389851Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:21:34.421288Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:21:34.465957Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:21:34.491344Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:21:34.510401Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:21:34.558260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:21:34.574892Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:21:34.593038Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:21:34.614257Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:21:34.642986Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:21:34.680750Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:21:34.707284Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:21:34.753411Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:21:34.788442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:21:34.825226Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:21:34.843551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:21:34.973048Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44158","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T18:21:39.982398Z","caller":"traceutil/trace.go:172","msg":"trace[318544622] transaction","detail":"{read_only:false; response_revision:303; number_of_response:1; }","duration":"104.922293ms","start":"2025-10-18T18:21:39.877461Z","end":"2025-10-18T18:21:39.982383Z","steps":["trace[318544622] 'process raft request'  (duration: 104.670828ms)"],"step_count":1}
	
	
	==> kernel <==
	 18:22:39 up  2:05,  0 user,  load average: 2.06, 2.93, 2.72
	Linux default-k8s-diff-port-192562 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [cbf6d95cd739f2c7555864b4124ae67d6ea943123d1a70fa535b264c82efc0a7] <==
	I1018 18:21:45.020828       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 18:21:45.022631       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1018 18:21:45.022771       1 main.go:148] setting mtu 1500 for CNI 
	I1018 18:21:45.022785       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 18:21:45.022798       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T18:21:45Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 18:21:45.345888       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 18:21:45.346120       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 18:21:45.346232       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 18:21:45.348055       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1018 18:22:15.346341       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1018 18:22:15.347656       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1018 18:22:15.347761       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1018 18:22:15.349042       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1018 18:22:17.046649       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 18:22:17.046774       1 metrics.go:72] Registering metrics
	I1018 18:22:17.046863       1 controller.go:711] "Syncing nftables rules"
	I1018 18:22:25.351382       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 18:22:25.351433       1 main.go:301] handling current node
	I1018 18:22:35.344926       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 18:22:35.345111       1 main.go:301] handling current node
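The kindnet log above shows the usual client-go pattern: list/watch calls time out while the apiserver is unreachable, the reflector retries, and only after "Caches are synced" does event handling begin. Below is a minimal informer sketch of that wait-for-sync step; the resource watched and the 30s resync period are assumptions for illustration, not values taken from kindnet or kube-proxy.

    // Start a shared informer and block until its cache has synced.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	"k8s.io/client-go/informers"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/cache"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)

    	ctx, cancel := context.WithCancel(context.Background())
    	defer cancel()

    	factory := informers.NewSharedInformerFactory(cs, 30*time.Second)
    	nodeInformer := factory.Core().V1().Nodes().Informer()

    	factory.Start(ctx.Done())

    	// Block until the node cache has synced; list/watch timeouts only delay this.
    	if !cache.WaitForCacheSync(ctx.Done(), nodeInformer.HasSynced) {
    		panic("caches did not sync")
    	}
    	fmt.Println("caches are synced; safe to read from the informer store")
    }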
	
	
	==> kube-apiserver [e34e5845627ab2816c92ea85765383f29210421781479af4789b64d2814c3979] <==
	I1018 18:21:36.426031       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1018 18:21:36.426413       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1018 18:21:36.426511       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 18:21:36.429245       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 18:21:36.443695       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 18:21:36.444126       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 18:21:36.512818       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 18:21:36.982153       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1018 18:21:36.988825       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1018 18:21:36.988851       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 18:21:37.942330       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 18:21:38.071257       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 18:21:38.218747       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	I1018 18:21:38.229776       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	W1018 18:21:38.248382       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1018 18:21:38.250697       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 18:21:38.279125       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 18:21:39.413315       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 18:21:39.449889       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1018 18:21:39.471417       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1018 18:21:43.864709       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 18:21:43.870256       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 18:21:43.963838       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 18:21:44.112304       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1018 18:22:38.227461       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8444->192.168.76.1:33538: use of closed network connection
	
	
	==> kube-controller-manager [e91fea021acf0352f725f54654ec68079cf7a60ed1df0eddbefbb86a2bb3dd71] <==
	I1018 18:21:43.209075       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1018 18:21:43.209147       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1018 18:21:43.209316       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-192562" podCIDRs=["10.244.0.0/24"]
	I1018 18:21:43.210728       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 18:21:43.210825       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1018 18:21:43.211173       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1018 18:21:43.214047       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1018 18:21:43.214094       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1018 18:21:43.214128       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 18:21:43.214199       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1018 18:21:43.222601       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 18:21:43.222681       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 18:21:43.222774       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1018 18:21:43.223478       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 18:21:43.223542       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1018 18:21:43.223561       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1018 18:21:43.223911       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 18:21:43.228580       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1018 18:21:43.228606       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1018 18:21:43.235030       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1018 18:21:43.238672       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1018 18:21:43.238763       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1018 18:21:43.247152       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1018 18:21:43.253871       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 18:22:28.200696       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [21f4069d97199f4e2958fb830b51fdc09989801b58bc975d0867e4dbc7fe5801] <==
	I1018 18:21:45.066408       1 server_linux.go:53] "Using iptables proxy"
	I1018 18:21:45.175059       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 18:21:45.277080       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 18:21:45.283847       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1018 18:21:45.284669       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 18:21:45.495577       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 18:21:45.495701       1 server_linux.go:132] "Using iptables Proxier"
	I1018 18:21:45.506592       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 18:21:45.507001       1 server.go:527] "Version info" version="v1.34.1"
	I1018 18:21:45.507239       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 18:21:45.513923       1 config.go:200] "Starting service config controller"
	I1018 18:21:45.514011       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 18:21:45.514059       1 config.go:106] "Starting endpoint slice config controller"
	I1018 18:21:45.514086       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 18:21:45.514134       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 18:21:45.514163       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 18:21:45.517572       1 config.go:309] "Starting node config controller"
	I1018 18:21:45.517646       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 18:21:45.517699       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 18:21:45.614193       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 18:21:45.614255       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 18:21:45.614320       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [5dc814f6f6169ebce45ee16a2ee6b9078e6c37b251c85d09d5620df746b60c34] <==
	E1018 18:21:36.500135       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 18:21:36.500187       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 18:21:36.500254       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 18:21:36.500304       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 18:21:36.500464       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1018 18:21:36.506985       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 18:21:36.507090       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 18:21:36.507162       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 18:21:36.507232       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 18:21:36.507296       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 18:21:36.507357       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 18:21:36.507419       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 18:21:36.507484       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 18:21:36.507548       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 18:21:36.507653       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 18:21:36.507750       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 18:21:37.333993       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 18:21:37.350153       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 18:21:37.360679       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 18:21:37.447022       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 18:21:37.470337       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 18:21:37.513845       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 18:21:37.531491       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 18:21:37.980339       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1018 18:21:39.848180       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 18:21:40 default-k8s-diff-port-192562 kubelet[1309]: E1018 18:21:40.767313    1309 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-default-k8s-diff-port-192562\" already exists" pod="kube-system/etcd-default-k8s-diff-port-192562"
	Oct 18 18:21:40 default-k8s-diff-port-192562 kubelet[1309]: I1018 18:21:40.836724    1309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-192562" podStartSLOduration=1.836705894 podStartE2EDuration="1.836705894s" podCreationTimestamp="2025-10-18 18:21:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 18:21:40.746755427 +0000 UTC m=+1.363501038" watchObservedRunningTime="2025-10-18 18:21:40.836705894 +0000 UTC m=+1.453451505"
	Oct 18 18:21:40 default-k8s-diff-port-192562 kubelet[1309]: I1018 18:21:40.964228    1309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-192562" podStartSLOduration=1.96420784 podStartE2EDuration="1.96420784s" podCreationTimestamp="2025-10-18 18:21:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 18:21:40.913899849 +0000 UTC m=+1.530645461" watchObservedRunningTime="2025-10-18 18:21:40.96420784 +0000 UTC m=+1.580953452"
	Oct 18 18:21:43 default-k8s-diff-port-192562 kubelet[1309]: I1018 18:21:43.225795    1309 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 18 18:21:43 default-k8s-diff-port-192562 kubelet[1309]: I1018 18:21:43.226446    1309 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 18 18:21:44 default-k8s-diff-port-192562 kubelet[1309]: I1018 18:21:44.310553    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/960d4013-a055-439a-8a17-791353ced8cb-xtables-lock\") pod \"kindnet-6vrvc\" (UID: \"960d4013-a055-439a-8a17-791353ced8cb\") " pod="kube-system/kindnet-6vrvc"
	Oct 18 18:21:44 default-k8s-diff-port-192562 kubelet[1309]: I1018 18:21:44.310596    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/56012359-4b0e-4ac5-b6c5-816c0b0c7063-lib-modules\") pod \"kube-proxy-c7jft\" (UID: \"56012359-4b0e-4ac5-b6c5-816c0b0c7063\") " pod="kube-system/kube-proxy-c7jft"
	Oct 18 18:21:44 default-k8s-diff-port-192562 kubelet[1309]: I1018 18:21:44.310674    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/960d4013-a055-439a-8a17-791353ced8cb-lib-modules\") pod \"kindnet-6vrvc\" (UID: \"960d4013-a055-439a-8a17-791353ced8cb\") " pod="kube-system/kindnet-6vrvc"
	Oct 18 18:21:44 default-k8s-diff-port-192562 kubelet[1309]: I1018 18:21:44.310734    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9ccr\" (UniqueName: \"kubernetes.io/projected/960d4013-a055-439a-8a17-791353ced8cb-kube-api-access-g9ccr\") pod \"kindnet-6vrvc\" (UID: \"960d4013-a055-439a-8a17-791353ced8cb\") " pod="kube-system/kindnet-6vrvc"
	Oct 18 18:21:44 default-k8s-diff-port-192562 kubelet[1309]: I1018 18:21:44.310753    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/56012359-4b0e-4ac5-b6c5-816c0b0c7063-xtables-lock\") pod \"kube-proxy-c7jft\" (UID: \"56012359-4b0e-4ac5-b6c5-816c0b0c7063\") " pod="kube-system/kube-proxy-c7jft"
	Oct 18 18:21:44 default-k8s-diff-port-192562 kubelet[1309]: I1018 18:21:44.310818    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpp2l\" (UniqueName: \"kubernetes.io/projected/56012359-4b0e-4ac5-b6c5-816c0b0c7063-kube-api-access-gpp2l\") pod \"kube-proxy-c7jft\" (UID: \"56012359-4b0e-4ac5-b6c5-816c0b0c7063\") " pod="kube-system/kube-proxy-c7jft"
	Oct 18 18:21:44 default-k8s-diff-port-192562 kubelet[1309]: I1018 18:21:44.310866    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/56012359-4b0e-4ac5-b6c5-816c0b0c7063-kube-proxy\") pod \"kube-proxy-c7jft\" (UID: \"56012359-4b0e-4ac5-b6c5-816c0b0c7063\") " pod="kube-system/kube-proxy-c7jft"
	Oct 18 18:21:44 default-k8s-diff-port-192562 kubelet[1309]: I1018 18:21:44.310889    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/960d4013-a055-439a-8a17-791353ced8cb-cni-cfg\") pod \"kindnet-6vrvc\" (UID: \"960d4013-a055-439a-8a17-791353ced8cb\") " pod="kube-system/kindnet-6vrvc"
	Oct 18 18:21:44 default-k8s-diff-port-192562 kubelet[1309]: I1018 18:21:44.452808    1309 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 18 18:21:45 default-k8s-diff-port-192562 kubelet[1309]: I1018 18:21:45.741396    1309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-c7jft" podStartSLOduration=1.741376823 podStartE2EDuration="1.741376823s" podCreationTimestamp="2025-10-18 18:21:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 18:21:45.741103228 +0000 UTC m=+6.357848848" watchObservedRunningTime="2025-10-18 18:21:45.741376823 +0000 UTC m=+6.358122443"
	Oct 18 18:21:45 default-k8s-diff-port-192562 kubelet[1309]: I1018 18:21:45.741508    1309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-6vrvc" podStartSLOduration=1.7415014850000001 podStartE2EDuration="1.741501485s" podCreationTimestamp="2025-10-18 18:21:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 18:21:45.695268016 +0000 UTC m=+6.312013652" watchObservedRunningTime="2025-10-18 18:21:45.741501485 +0000 UTC m=+6.358247105"
	Oct 18 18:22:25 default-k8s-diff-port-192562 kubelet[1309]: I1018 18:22:25.446282    1309 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 18 18:22:25 default-k8s-diff-port-192562 kubelet[1309]: I1018 18:22:25.528464    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjc8l\" (UniqueName: \"kubernetes.io/projected/ee029b71-681a-4716-ad5f-c699bd315801-kube-api-access-cjc8l\") pod \"storage-provisioner\" (UID: \"ee029b71-681a-4716-ad5f-c699bd315801\") " pod="kube-system/storage-provisioner"
	Oct 18 18:22:25 default-k8s-diff-port-192562 kubelet[1309]: I1018 18:22:25.528531    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9chpq\" (UniqueName: \"kubernetes.io/projected/31c59339-b043-4ff8-858f-bd618113dfa3-kube-api-access-9chpq\") pod \"coredns-66bc5c9577-psj29\" (UID: \"31c59339-b043-4ff8-858f-bd618113dfa3\") " pod="kube-system/coredns-66bc5c9577-psj29"
	Oct 18 18:22:25 default-k8s-diff-port-192562 kubelet[1309]: I1018 18:22:25.528552    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ee029b71-681a-4716-ad5f-c699bd315801-tmp\") pod \"storage-provisioner\" (UID: \"ee029b71-681a-4716-ad5f-c699bd315801\") " pod="kube-system/storage-provisioner"
	Oct 18 18:22:25 default-k8s-diff-port-192562 kubelet[1309]: I1018 18:22:25.528573    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/31c59339-b043-4ff8-858f-bd618113dfa3-config-volume\") pod \"coredns-66bc5c9577-psj29\" (UID: \"31c59339-b043-4ff8-858f-bd618113dfa3\") " pod="kube-system/coredns-66bc5c9577-psj29"
	Oct 18 18:22:25 default-k8s-diff-port-192562 kubelet[1309]: W1018 18:22:25.810515    1309 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/c0a8933c552c9d4e5fb4ca01ca33c573463079ebfb6960b8ac96dc752d5faeaa/crio-fa55256170575d1a5d3e1e36d519e283dabd8b7815c2d991592e3009c5de5bcc WatchSource:0}: Error finding container fa55256170575d1a5d3e1e36d519e283dabd8b7815c2d991592e3009c5de5bcc: Status 404 returned error can't find the container with id fa55256170575d1a5d3e1e36d519e283dabd8b7815c2d991592e3009c5de5bcc
	Oct 18 18:22:26 default-k8s-diff-port-192562 kubelet[1309]: I1018 18:22:26.810027    1309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-psj29" podStartSLOduration=42.810008001 podStartE2EDuration="42.810008001s" podCreationTimestamp="2025-10-18 18:21:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 18:22:26.792608139 +0000 UTC m=+47.409353751" watchObservedRunningTime="2025-10-18 18:22:26.810008001 +0000 UTC m=+47.426753613"
	Oct 18 18:22:29 default-k8s-diff-port-192562 kubelet[1309]: I1018 18:22:29.094433    1309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=44.09441351 podStartE2EDuration="44.09441351s" podCreationTimestamp="2025-10-18 18:21:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 18:22:26.836765833 +0000 UTC m=+47.453511453" watchObservedRunningTime="2025-10-18 18:22:29.09441351 +0000 UTC m=+49.711159122"
	Oct 18 18:22:29 default-k8s-diff-port-192562 kubelet[1309]: I1018 18:22:29.153467    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vt2fg\" (UniqueName: \"kubernetes.io/projected/3e78f628-7e13-41fa-9490-a4c4f9ae21c7-kube-api-access-vt2fg\") pod \"busybox\" (UID: \"3e78f628-7e13-41fa-9490-a4c4f9ae21c7\") " pod="default/busybox"
	
	
	==> storage-provisioner [a6b6e1a78f1c3351f7be8c8bccfa2154259fbbee97115c1d2c4af4ca23ec7924] <==
	I1018 18:22:25.880857       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 18:22:25.899954       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 18:22:25.900010       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1018 18:22:25.905923       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:22:25.926432       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 18:22:25.929178       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 18:22:25.929455       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-192562_5f4d5ec7-7c55-4b4f-a7fd-d7107db5b275!
	W1018 18:22:25.932082       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 18:22:25.933923       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ca1be163-b867-4e80-ab6a-bbe296c21eb5", APIVersion:"v1", ResourceVersion:"462", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-192562_5f4d5ec7-7c55-4b4f-a7fd-d7107db5b275 became leader
	W1018 18:22:25.964112       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 18:22:26.032822       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-192562_5f4d5ec7-7c55-4b4f-a7fd-d7107db5b275!
	W1018 18:22:27.967562       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:22:27.972298       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:22:29.975348       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:22:29.980137       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:22:31.984184       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:22:31.989397       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:22:33.992574       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:22:33.999936       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:22:36.008257       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:22:36.013741       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:22:38.016924       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:22:38.025807       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:22:40.035102       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:22:40.046216       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-192562 -n default-k8s-diff-port-192562
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-192562 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.57s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.8s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-213943 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-213943 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (290.058365ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T18:23:09Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-213943 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
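For context, the MK_ADDON_ENABLE_PAUSED error quoted above comes from minikube's paused-container check, which shells into the node and runs "sudo runc list -f json". A minimal manual reproduction of that check is sketched below; the profile name and the /run/runc path are taken from this run's output, while running it interactively against the node is an assumption, not something the test did:

    # Hypothetical manual re-run of the check that failed above:
    out/minikube-linux-arm64 -p embed-certs-213943 ssh -- sudo runc list -f json
    # The stderr line "open /run/runc: no such file or directory" indicates runc's
    # state directory (the path named in the error) is missing on the node, so
    # listing it shows whether any runc state root exists at that location:
    out/minikube-linux-arm64 -p embed-certs-213943 ssh -- ls /run/runc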
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-213943 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-213943 describe deploy/metrics-server -n kube-system: exit status 1 (90.091165ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-213943 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
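The assertion at start_stop_delete_test.go:219 expects the metrics-server Deployment to reference the custom registry passed via --registries. A sketch of how that image value could be inspected by hand is below; since the deployment was NotFound in this run, this is illustrative only, and the jsonpath expression is an assumption about where the image field would live:

    # Hypothetical check of the deployed image (deployment did not exist in this run):
    kubectl --context embed-certs-213943 -n kube-system get deploy metrics-server \
      -o jsonpath='{.spec.template.spec.containers[0].image}'
    # Expected by the test to contain: fake.domain/registry.k8s.io/echoserver:1.4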
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-213943
helpers_test.go:243: (dbg) docker inspect embed-certs-213943:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f6d884df9095b5a97c2ba5df164207ee5c937524354408254d52ae7a929463c6",
	        "Created": "2025-10-18T18:21:41.10994787Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 201382,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T18:21:41.177493923Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/f6d884df9095b5a97c2ba5df164207ee5c937524354408254d52ae7a929463c6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f6d884df9095b5a97c2ba5df164207ee5c937524354408254d52ae7a929463c6/hostname",
	        "HostsPath": "/var/lib/docker/containers/f6d884df9095b5a97c2ba5df164207ee5c937524354408254d52ae7a929463c6/hosts",
	        "LogPath": "/var/lib/docker/containers/f6d884df9095b5a97c2ba5df164207ee5c937524354408254d52ae7a929463c6/f6d884df9095b5a97c2ba5df164207ee5c937524354408254d52ae7a929463c6-json.log",
	        "Name": "/embed-certs-213943",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-213943:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-213943",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f6d884df9095b5a97c2ba5df164207ee5c937524354408254d52ae7a929463c6",
	                "LowerDir": "/var/lib/docker/overlay2/5ae3bc0eef02b15432a8f6a5068c9db91f9b4ede8c0e696a3d1cf388220bd2a0-init/diff:/var/lib/docker/overlay2/584ab177b02ad2db5330471b7171ad39934c457d8615b9ee4939a04b59f78474/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5ae3bc0eef02b15432a8f6a5068c9db91f9b4ede8c0e696a3d1cf388220bd2a0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5ae3bc0eef02b15432a8f6a5068c9db91f9b4ede8c0e696a3d1cf388220bd2a0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5ae3bc0eef02b15432a8f6a5068c9db91f9b4ede8c0e696a3d1cf388220bd2a0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-213943",
	                "Source": "/var/lib/docker/volumes/embed-certs-213943/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-213943",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-213943",
	                "name.minikube.sigs.k8s.io": "embed-certs-213943",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5213178b09cffcff020be7323849383bc5747b62076c5ac7533c61d5265df2ee",
	            "SandboxKey": "/var/run/docker/netns/5213178b09cf",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33058"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33059"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33062"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33060"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33061"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-213943": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "76:eb:59:97:8a:a3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "efe92dc8c8166df0c3008dadfb93e08ef35b4f9b392d6a8aee91eaee89568b86",
	                    "EndpointID": "e78a8fed937c68349513d793e25faf53ba7218eea8188cb7a4b2b8b54daf2bb7",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-213943",
	                        "f6d884df9095"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-213943 -n embed-certs-213943
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-213943 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-213943 logs -n 25: (1.414125996s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ delete  │ -p pause-321903                                                                                                                                                                                                                               │ pause-321903                 │ jenkins │ v1.37.0 │ 18 Oct 25 18:17 UTC │ 18 Oct 25 18:17 UTC │
	│ start   │ -p cert-expiration-463770 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-463770       │ jenkins │ v1.37.0 │ 18 Oct 25 18:17 UTC │ 18 Oct 25 18:18 UTC │
	│ delete  │ -p force-systemd-env-785999                                                                                                                                                                                                                   │ force-systemd-env-785999     │ jenkins │ v1.37.0 │ 18 Oct 25 18:17 UTC │ 18 Oct 25 18:17 UTC │
	│ start   │ -p cert-options-327418 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-327418          │ jenkins │ v1.37.0 │ 18 Oct 25 18:17 UTC │ 18 Oct 25 18:18 UTC │
	│ ssh     │ cert-options-327418 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-327418          │ jenkins │ v1.37.0 │ 18 Oct 25 18:18 UTC │ 18 Oct 25 18:18 UTC │
	│ ssh     │ -p cert-options-327418 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-327418          │ jenkins │ v1.37.0 │ 18 Oct 25 18:18 UTC │ 18 Oct 25 18:18 UTC │
	│ delete  │ -p cert-options-327418                                                                                                                                                                                                                        │ cert-options-327418          │ jenkins │ v1.37.0 │ 18 Oct 25 18:18 UTC │ 18 Oct 25 18:18 UTC │
	│ start   │ -p old-k8s-version-918475 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-918475       │ jenkins │ v1.37.0 │ 18 Oct 25 18:18 UTC │ 18 Oct 25 18:19 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-918475 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-918475       │ jenkins │ v1.37.0 │ 18 Oct 25 18:19 UTC │                     │
	│ stop    │ -p old-k8s-version-918475 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-918475       │ jenkins │ v1.37.0 │ 18 Oct 25 18:19 UTC │ 18 Oct 25 18:19 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-918475 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-918475       │ jenkins │ v1.37.0 │ 18 Oct 25 18:19 UTC │ 18 Oct 25 18:19 UTC │
	│ start   │ -p old-k8s-version-918475 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-918475       │ jenkins │ v1.37.0 │ 18 Oct 25 18:19 UTC │ 18 Oct 25 18:20 UTC │
	│ image   │ old-k8s-version-918475 image list --format=json                                                                                                                                                                                               │ old-k8s-version-918475       │ jenkins │ v1.37.0 │ 18 Oct 25 18:20 UTC │ 18 Oct 25 18:20 UTC │
	│ pause   │ -p old-k8s-version-918475 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-918475       │ jenkins │ v1.37.0 │ 18 Oct 25 18:20 UTC │                     │
	│ delete  │ -p old-k8s-version-918475                                                                                                                                                                                                                     │ old-k8s-version-918475       │ jenkins │ v1.37.0 │ 18 Oct 25 18:20 UTC │ 18 Oct 25 18:21 UTC │
	│ start   │ -p cert-expiration-463770 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-463770       │ jenkins │ v1.37.0 │ 18 Oct 25 18:21 UTC │ 18 Oct 25 18:21 UTC │
	│ delete  │ -p old-k8s-version-918475                                                                                                                                                                                                                     │ old-k8s-version-918475       │ jenkins │ v1.37.0 │ 18 Oct 25 18:21 UTC │ 18 Oct 25 18:21 UTC │
	│ start   │ -p default-k8s-diff-port-192562 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-192562 │ jenkins │ v1.37.0 │ 18 Oct 25 18:21 UTC │ 18 Oct 25 18:22 UTC │
	│ delete  │ -p cert-expiration-463770                                                                                                                                                                                                                     │ cert-expiration-463770       │ jenkins │ v1.37.0 │ 18 Oct 25 18:21 UTC │ 18 Oct 25 18:21 UTC │
	│ start   │ -p embed-certs-213943 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-213943           │ jenkins │ v1.37.0 │ 18 Oct 25 18:21 UTC │ 18 Oct 25 18:22 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-192562 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-192562 │ jenkins │ v1.37.0 │ 18 Oct 25 18:22 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-192562 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-192562 │ jenkins │ v1.37.0 │ 18 Oct 25 18:22 UTC │ 18 Oct 25 18:22 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-192562 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-192562 │ jenkins │ v1.37.0 │ 18 Oct 25 18:22 UTC │ 18 Oct 25 18:22 UTC │
	│ start   │ -p default-k8s-diff-port-192562 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-192562 │ jenkins │ v1.37.0 │ 18 Oct 25 18:22 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-213943 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-213943           │ jenkins │ v1.37.0 │ 18 Oct 25 18:23 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 18:22:53
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 18:22:53.086359  204660 out.go:360] Setting OutFile to fd 1 ...
	I1018 18:22:53.086526  204660 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 18:22:53.086538  204660 out.go:374] Setting ErrFile to fd 2...
	I1018 18:22:53.086544  204660 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 18:22:53.086845  204660 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-2509/.minikube/bin
	I1018 18:22:53.087232  204660 out.go:368] Setting JSON to false
	I1018 18:22:53.088168  204660 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7522,"bootTime":1760804251,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1018 18:22:53.088240  204660 start.go:141] virtualization:  
	I1018 18:22:53.093474  204660 out.go:179] * [default-k8s-diff-port-192562] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 18:22:53.096623  204660 out.go:179]   - MINIKUBE_LOCATION=21409
	I1018 18:22:53.096670  204660 notify.go:220] Checking for updates...
	I1018 18:22:53.100682  204660 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 18:22:53.103568  204660 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-2509/kubeconfig
	I1018 18:22:53.106468  204660 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-2509/.minikube
	I1018 18:22:53.109382  204660 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 18:22:53.112260  204660 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 18:22:53.115502  204660 config.go:182] Loaded profile config "default-k8s-diff-port-192562": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 18:22:53.116075  204660 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 18:22:53.138280  204660 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 18:22:53.139060  204660 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 18:22:53.202963  204660 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 18:22:53.193732676 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 18:22:53.203067  204660 docker.go:318] overlay module found
	I1018 18:22:53.206456  204660 out.go:179] * Using the docker driver based on existing profile
	I1018 18:22:53.209239  204660 start.go:305] selected driver: docker
	I1018 18:22:53.209259  204660 start.go:925] validating driver "docker" against &{Name:default-k8s-diff-port-192562 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-192562 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 18:22:53.209369  204660 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 18:22:53.210068  204660 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 18:22:53.264306  204660 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 18:22:53.253648143 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 18:22:53.264674  204660 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 18:22:53.264708  204660 cni.go:84] Creating CNI manager for ""
	I1018 18:22:53.264757  204660 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 18:22:53.264798  204660 start.go:349] cluster config:
	{Name:default-k8s-diff-port-192562 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-192562 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 18:22:53.268059  204660 out.go:179] * Starting "default-k8s-diff-port-192562" primary control-plane node in "default-k8s-diff-port-192562" cluster
	I1018 18:22:53.270870  204660 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 18:22:53.273809  204660 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 18:22:53.276578  204660 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 18:22:53.276631  204660 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1018 18:22:53.276649  204660 cache.go:58] Caching tarball of preloaded images
	I1018 18:22:53.276731  204660 preload.go:233] Found /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 18:22:53.276747  204660 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 18:22:53.276875  204660 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/default-k8s-diff-port-192562/config.json ...
	I1018 18:22:53.276998  204660 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 18:22:53.298031  204660 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 18:22:53.298054  204660 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 18:22:53.298067  204660 cache.go:232] Successfully downloaded all kic artifacts
	I1018 18:22:53.298093  204660 start.go:360] acquireMachinesLock for default-k8s-diff-port-192562: {Name:mk20baa0c5cf7cf5c0574a2664cf91a57026bcfa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 18:22:53.298155  204660 start.go:364] duration metric: took 37.096µs to acquireMachinesLock for "default-k8s-diff-port-192562"
	I1018 18:22:53.298180  204660 start.go:96] Skipping create...Using existing machine configuration
	I1018 18:22:53.298186  204660 fix.go:54] fixHost starting: 
	I1018 18:22:53.298455  204660 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-192562 --format={{.State.Status}}
	I1018 18:22:53.315461  204660 fix.go:112] recreateIfNeeded on default-k8s-diff-port-192562: state=Stopped err=<nil>
	W1018 18:22:53.315495  204660 fix.go:138] unexpected machine state, will restart: <nil>
	W1018 18:22:50.722904  200913 node_ready.go:57] node "embed-certs-213943" has "Ready":"False" status (will retry)
	W1018 18:22:52.723909  200913 node_ready.go:57] node "embed-certs-213943" has "Ready":"False" status (will retry)
	I1018 18:22:53.318694  204660 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-192562" ...
	I1018 18:22:53.318775  204660 cli_runner.go:164] Run: docker start default-k8s-diff-port-192562
	I1018 18:22:53.580450  204660 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-192562 --format={{.State.Status}}
	I1018 18:22:53.603380  204660 kic.go:430] container "default-k8s-diff-port-192562" state is running.
	I1018 18:22:53.603899  204660 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-192562
	I1018 18:22:53.629762  204660 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/default-k8s-diff-port-192562/config.json ...
	I1018 18:22:53.630215  204660 machine.go:93] provisionDockerMachine start ...
	I1018 18:22:53.630280  204660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-192562
	I1018 18:22:53.653735  204660 main.go:141] libmachine: Using SSH client type: native
	I1018 18:22:53.654071  204660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1018 18:22:53.654080  204660 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 18:22:53.654992  204660 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1018 18:22:56.804559  204660 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-192562
	
	I1018 18:22:56.804585  204660 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-192562"
	I1018 18:22:56.804655  204660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-192562
	I1018 18:22:56.822233  204660 main.go:141] libmachine: Using SSH client type: native
	I1018 18:22:56.822562  204660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1018 18:22:56.822580  204660 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-192562 && echo "default-k8s-diff-port-192562" | sudo tee /etc/hostname
	I1018 18:22:56.990787  204660 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-192562
	
	I1018 18:22:56.990879  204660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-192562
	I1018 18:22:57.010261  204660 main.go:141] libmachine: Using SSH client type: native
	I1018 18:22:57.010596  204660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1018 18:22:57.010622  204660 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-192562' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-192562/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-192562' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 18:22:57.157110  204660 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 18:22:57.157134  204660 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-2509/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-2509/.minikube}
	I1018 18:22:57.157152  204660 ubuntu.go:190] setting up certificates
	I1018 18:22:57.157162  204660 provision.go:84] configureAuth start
	I1018 18:22:57.157219  204660 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-192562
	I1018 18:22:57.174223  204660 provision.go:143] copyHostCerts
	I1018 18:22:57.174293  204660 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem, removing ...
	I1018 18:22:57.174320  204660 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem
	I1018 18:22:57.174407  204660 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem (1078 bytes)
	I1018 18:22:57.174507  204660 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem, removing ...
	I1018 18:22:57.174517  204660 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem
	I1018 18:22:57.174544  204660 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem (1123 bytes)
	I1018 18:22:57.174602  204660 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem, removing ...
	I1018 18:22:57.174612  204660 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem
	I1018 18:22:57.174635  204660 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem (1675 bytes)
	I1018 18:22:57.174725  204660 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-192562 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-192562 localhost minikube]
	I1018 18:22:57.804147  204660 provision.go:177] copyRemoteCerts
	I1018 18:22:57.804216  204660 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 18:22:57.804261  204660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-192562
	I1018 18:22:57.821936  204660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/default-k8s-diff-port-192562/id_rsa Username:docker}
	I1018 18:22:57.930966  204660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 18:22:57.951019  204660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 18:22:57.970796  204660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1018 18:22:57.996611  204660 provision.go:87] duration metric: took 839.434661ms to configureAuth
	I1018 18:22:57.996635  204660 ubuntu.go:206] setting minikube options for container-runtime
	I1018 18:22:57.996844  204660 config.go:182] Loaded profile config "default-k8s-diff-port-192562": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 18:22:57.996972  204660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-192562
	I1018 18:22:58.017915  204660 main.go:141] libmachine: Using SSH client type: native
	I1018 18:22:58.018242  204660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1018 18:22:58.018262  204660 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	W1018 18:22:55.221907  200913 node_ready.go:57] node "embed-certs-213943" has "Ready":"False" status (will retry)
	I1018 18:22:56.222503  200913 node_ready.go:49] node "embed-certs-213943" is "Ready"
	I1018 18:22:56.222534  200913 node_ready.go:38] duration metric: took 40.503388789s for node "embed-certs-213943" to be "Ready" ...
	I1018 18:22:56.222547  200913 api_server.go:52] waiting for apiserver process to appear ...
	I1018 18:22:56.222613  200913 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 18:22:56.234377  200913 api_server.go:72] duration metric: took 41.770093908s to wait for apiserver process to appear ...
	I1018 18:22:56.234403  200913 api_server.go:88] waiting for apiserver healthz status ...
	I1018 18:22:56.234421  200913 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 18:22:56.244865  200913 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1018 18:22:56.246047  200913 api_server.go:141] control plane version: v1.34.1
	I1018 18:22:56.246078  200913 api_server.go:131] duration metric: took 11.668145ms to wait for apiserver health ...
	I1018 18:22:56.246088  200913 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 18:22:56.251469  200913 system_pods.go:59] 8 kube-system pods found
	I1018 18:22:56.251524  200913 system_pods.go:61] "coredns-66bc5c9577-grf2z" [0a6125b1-a0eb-4600-9b53-35017d6ee21b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 18:22:56.251544  200913 system_pods.go:61] "etcd-embed-certs-213943" [8b55657c-393f-48c1-9a5d-6ab96021decb] Running
	I1018 18:22:56.251551  200913 system_pods.go:61] "kindnet-44fc8" [b35c637a-9afc-46ee-93dd-89db133869e9] Running
	I1018 18:22:56.251556  200913 system_pods.go:61] "kube-apiserver-embed-certs-213943" [e615020d-5cc5-4e06-8605-21cfcd9b1750] Running
	I1018 18:22:56.251573  200913 system_pods.go:61] "kube-controller-manager-embed-certs-213943" [01383f1b-63a2-47e1-8946-f987e9bcee73] Running
	I1018 18:22:56.251582  200913 system_pods.go:61] "kube-proxy-gcf8n" [0f81c7f5-8e47-4826-bdb3-867782c394a7] Running
	I1018 18:22:56.251590  200913 system_pods.go:61] "kube-scheduler-embed-certs-213943" [216b830a-b447-408c-a3d1-7233624d11a6] Running
	I1018 18:22:56.251596  200913 system_pods.go:61] "storage-provisioner" [8b4837a6-135d-4719-b80f-0e37d07f3fe4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 18:22:56.251602  200913 system_pods.go:74] duration metric: took 5.502988ms to wait for pod list to return data ...
	I1018 18:22:56.251611  200913 default_sa.go:34] waiting for default service account to be created ...
	I1018 18:22:56.256225  200913 default_sa.go:45] found service account: "default"
	I1018 18:22:56.256252  200913 default_sa.go:55] duration metric: took 4.631264ms for default service account to be created ...
	I1018 18:22:56.256272  200913 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 18:22:56.260182  200913 system_pods.go:86] 8 kube-system pods found
	I1018 18:22:56.260262  200913 system_pods.go:89] "coredns-66bc5c9577-grf2z" [0a6125b1-a0eb-4600-9b53-35017d6ee21b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 18:22:56.260283  200913 system_pods.go:89] "etcd-embed-certs-213943" [8b55657c-393f-48c1-9a5d-6ab96021decb] Running
	I1018 18:22:56.260302  200913 system_pods.go:89] "kindnet-44fc8" [b35c637a-9afc-46ee-93dd-89db133869e9] Running
	I1018 18:22:56.260334  200913 system_pods.go:89] "kube-apiserver-embed-certs-213943" [e615020d-5cc5-4e06-8605-21cfcd9b1750] Running
	I1018 18:22:56.260357  200913 system_pods.go:89] "kube-controller-manager-embed-certs-213943" [01383f1b-63a2-47e1-8946-f987e9bcee73] Running
	I1018 18:22:56.260375  200913 system_pods.go:89] "kube-proxy-gcf8n" [0f81c7f5-8e47-4826-bdb3-867782c394a7] Running
	I1018 18:22:56.260416  200913 system_pods.go:89] "kube-scheduler-embed-certs-213943" [216b830a-b447-408c-a3d1-7233624d11a6] Running
	I1018 18:22:56.260440  200913 system_pods.go:89] "storage-provisioner" [8b4837a6-135d-4719-b80f-0e37d07f3fe4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 18:22:56.260473  200913 retry.go:31] will retry after 201.569083ms: missing components: kube-dns
	I1018 18:22:56.471948  200913 system_pods.go:86] 8 kube-system pods found
	I1018 18:22:56.472030  200913 system_pods.go:89] "coredns-66bc5c9577-grf2z" [0a6125b1-a0eb-4600-9b53-35017d6ee21b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 18:22:56.472060  200913 system_pods.go:89] "etcd-embed-certs-213943" [8b55657c-393f-48c1-9a5d-6ab96021decb] Running
	I1018 18:22:56.472096  200913 system_pods.go:89] "kindnet-44fc8" [b35c637a-9afc-46ee-93dd-89db133869e9] Running
	I1018 18:22:56.472116  200913 system_pods.go:89] "kube-apiserver-embed-certs-213943" [e615020d-5cc5-4e06-8605-21cfcd9b1750] Running
	I1018 18:22:56.472136  200913 system_pods.go:89] "kube-controller-manager-embed-certs-213943" [01383f1b-63a2-47e1-8946-f987e9bcee73] Running
	I1018 18:22:56.472154  200913 system_pods.go:89] "kube-proxy-gcf8n" [0f81c7f5-8e47-4826-bdb3-867782c394a7] Running
	I1018 18:22:56.472184  200913 system_pods.go:89] "kube-scheduler-embed-certs-213943" [216b830a-b447-408c-a3d1-7233624d11a6] Running
	I1018 18:22:56.472208  200913 system_pods.go:89] "storage-provisioner" [8b4837a6-135d-4719-b80f-0e37d07f3fe4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 18:22:56.472235  200913 retry.go:31] will retry after 387.7845ms: missing components: kube-dns
	I1018 18:22:56.864000  200913 system_pods.go:86] 8 kube-system pods found
	I1018 18:22:56.864036  200913 system_pods.go:89] "coredns-66bc5c9577-grf2z" [0a6125b1-a0eb-4600-9b53-35017d6ee21b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 18:22:56.864044  200913 system_pods.go:89] "etcd-embed-certs-213943" [8b55657c-393f-48c1-9a5d-6ab96021decb] Running
	I1018 18:22:56.864050  200913 system_pods.go:89] "kindnet-44fc8" [b35c637a-9afc-46ee-93dd-89db133869e9] Running
	I1018 18:22:56.864054  200913 system_pods.go:89] "kube-apiserver-embed-certs-213943" [e615020d-5cc5-4e06-8605-21cfcd9b1750] Running
	I1018 18:22:56.864059  200913 system_pods.go:89] "kube-controller-manager-embed-certs-213943" [01383f1b-63a2-47e1-8946-f987e9bcee73] Running
	I1018 18:22:56.864064  200913 system_pods.go:89] "kube-proxy-gcf8n" [0f81c7f5-8e47-4826-bdb3-867782c394a7] Running
	I1018 18:22:56.864067  200913 system_pods.go:89] "kube-scheduler-embed-certs-213943" [216b830a-b447-408c-a3d1-7233624d11a6] Running
	I1018 18:22:56.864072  200913 system_pods.go:89] "storage-provisioner" [8b4837a6-135d-4719-b80f-0e37d07f3fe4] Running
	I1018 18:22:56.864080  200913 system_pods.go:126] duration metric: took 607.801752ms to wait for k8s-apps to be running ...
	I1018 18:22:56.864091  200913 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 18:22:56.864149  200913 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 18:22:56.878694  200913 system_svc.go:56] duration metric: took 14.594246ms WaitForService to wait for kubelet
	I1018 18:22:56.878719  200913 kubeadm.go:586] duration metric: took 42.414439487s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 18:22:56.878749  200913 node_conditions.go:102] verifying NodePressure condition ...
	I1018 18:22:56.883720  200913 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 18:22:56.883799  200913 node_conditions.go:123] node cpu capacity is 2
	I1018 18:22:56.883826  200913 node_conditions.go:105] duration metric: took 5.070645ms to run NodePressure ...
	I1018 18:22:56.883851  200913 start.go:241] waiting for startup goroutines ...
	I1018 18:22:56.883872  200913 start.go:246] waiting for cluster config update ...
	I1018 18:22:56.883903  200913 start.go:255] writing updated cluster config ...
	I1018 18:22:56.884201  200913 ssh_runner.go:195] Run: rm -f paused
	I1018 18:22:56.888843  200913 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 18:22:56.893187  200913 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-grf2z" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:22:57.899264  200913 pod_ready.go:94] pod "coredns-66bc5c9577-grf2z" is "Ready"
	I1018 18:22:57.899292  200913 pod_ready.go:86] duration metric: took 1.006083295s for pod "coredns-66bc5c9577-grf2z" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:22:57.902224  200913 pod_ready.go:83] waiting for pod "etcd-embed-certs-213943" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:22:57.906899  200913 pod_ready.go:94] pod "etcd-embed-certs-213943" is "Ready"
	I1018 18:22:57.906924  200913 pod_ready.go:86] duration metric: took 4.669943ms for pod "etcd-embed-certs-213943" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:22:57.909548  200913 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-213943" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:22:57.914541  200913 pod_ready.go:94] pod "kube-apiserver-embed-certs-213943" is "Ready"
	I1018 18:22:57.914574  200913 pod_ready.go:86] duration metric: took 4.999358ms for pod "kube-apiserver-embed-certs-213943" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:22:57.917666  200913 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-213943" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:22:58.097739  200913 pod_ready.go:94] pod "kube-controller-manager-embed-certs-213943" is "Ready"
	I1018 18:22:58.097782  200913 pod_ready.go:86] duration metric: took 180.062328ms for pod "kube-controller-manager-embed-certs-213943" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:22:58.298657  200913 pod_ready.go:83] waiting for pod "kube-proxy-gcf8n" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:22:58.697038  200913 pod_ready.go:94] pod "kube-proxy-gcf8n" is "Ready"
	I1018 18:22:58.697067  200913 pod_ready.go:86] duration metric: took 398.38237ms for pod "kube-proxy-gcf8n" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:22:58.896994  200913 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-213943" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:22:59.297444  200913 pod_ready.go:94] pod "kube-scheduler-embed-certs-213943" is "Ready"
	I1018 18:22:59.297476  200913 pod_ready.go:86] duration metric: took 400.454462ms for pod "kube-scheduler-embed-certs-213943" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:22:59.297488  200913 pod_ready.go:40] duration metric: took 2.408619566s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 18:22:59.391796  200913 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 18:22:59.397156  200913 out.go:179] * Done! kubectl is now configured to use "embed-certs-213943" cluster and "default" namespace by default
	I1018 18:22:58.345057  204660 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 18:22:58.345083  204660 machine.go:96] duration metric: took 4.714854399s to provisionDockerMachine
	I1018 18:22:58.345094  204660 start.go:293] postStartSetup for "default-k8s-diff-port-192562" (driver="docker")
	I1018 18:22:58.345106  204660 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 18:22:58.345164  204660 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 18:22:58.345230  204660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-192562
	I1018 18:22:58.376474  204660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/default-k8s-diff-port-192562/id_rsa Username:docker}
	I1018 18:22:58.481525  204660 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 18:22:58.484974  204660 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 18:22:58.485005  204660 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 18:22:58.485023  204660 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/addons for local assets ...
	I1018 18:22:58.485077  204660 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/files for local assets ...
	I1018 18:22:58.485155  204660 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> 43202.pem in /etc/ssl/certs
	I1018 18:22:58.485264  204660 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 18:22:58.493194  204660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /etc/ssl/certs/43202.pem (1708 bytes)
	I1018 18:22:58.512689  204660 start.go:296] duration metric: took 167.579912ms for postStartSetup
	I1018 18:22:58.512782  204660 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 18:22:58.512820  204660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-192562
	I1018 18:22:58.530643  204660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/default-k8s-diff-port-192562/id_rsa Username:docker}
	I1018 18:22:58.629806  204660 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 18:22:58.634353  204660 fix.go:56] duration metric: took 5.336160749s for fixHost
	I1018 18:22:58.634388  204660 start.go:83] releasing machines lock for "default-k8s-diff-port-192562", held for 5.336209562s
	I1018 18:22:58.634461  204660 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-192562
	I1018 18:22:58.650640  204660 ssh_runner.go:195] Run: cat /version.json
	I1018 18:22:58.650702  204660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-192562
	I1018 18:22:58.650948  204660 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 18:22:58.651005  204660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-192562
	I1018 18:22:58.667267  204660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/default-k8s-diff-port-192562/id_rsa Username:docker}
	I1018 18:22:58.668880  204660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/default-k8s-diff-port-192562/id_rsa Username:docker}
	I1018 18:22:58.768695  204660 ssh_runner.go:195] Run: systemctl --version
	I1018 18:22:58.862981  204660 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 18:22:58.904284  204660 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 18:22:58.908698  204660 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 18:22:58.908797  204660 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 18:22:58.917054  204660 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 18:22:58.917078  204660 start.go:495] detecting cgroup driver to use...
	I1018 18:22:58.917109  204660 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 18:22:58.917155  204660 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 18:22:58.932771  204660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 18:22:58.946636  204660 docker.go:218] disabling cri-docker service (if available) ...
	I1018 18:22:58.946705  204660 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 18:22:58.962066  204660 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 18:22:58.976111  204660 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 18:22:59.098478  204660 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 18:22:59.220995  204660 docker.go:234] disabling docker service ...
	I1018 18:22:59.221115  204660 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 18:22:59.236237  204660 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 18:22:59.249535  204660 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 18:22:59.403409  204660 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 18:22:59.628233  204660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 18:22:59.649157  204660 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 18:22:59.668696  204660 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 18:22:59.668754  204660 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:22:59.681473  204660 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 18:22:59.681539  204660 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:22:59.692109  204660 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:22:59.701990  204660 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:22:59.711627  204660 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 18:22:59.720774  204660 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:22:59.731034  204660 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:22:59.739649  204660 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:22:59.750391  204660 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 18:22:59.758424  204660 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 18:22:59.766015  204660 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 18:22:59.890223  204660 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 18:23:00.168717  204660 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 18:23:00.168887  204660 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 18:23:00.207517  204660 start.go:563] Will wait 60s for crictl version
	I1018 18:23:00.207634  204660 ssh_runner.go:195] Run: which crictl
	I1018 18:23:00.213966  204660 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 18:23:00.286653  204660 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 18:23:00.286826  204660 ssh_runner.go:195] Run: crio --version
	I1018 18:23:00.322677  204660 ssh_runner.go:195] Run: crio --version
	I1018 18:23:00.379374  204660 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 18:23:00.382369  204660 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-192562 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 18:23:00.402584  204660 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1018 18:23:00.407215  204660 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 18:23:00.419147  204660 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-192562 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-192562 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 18:23:00.419290  204660 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 18:23:00.419352  204660 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 18:23:00.464511  204660 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 18:23:00.464541  204660 crio.go:433] Images already preloaded, skipping extraction
	I1018 18:23:00.464606  204660 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 18:23:00.494507  204660 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 18:23:00.494529  204660 cache_images.go:85] Images are preloaded, skipping loading
	I1018 18:23:00.494536  204660 kubeadm.go:934] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1018 18:23:00.494642  204660 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-192562 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-192562 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 18:23:00.494751  204660 ssh_runner.go:195] Run: crio config
	I1018 18:23:00.570829  204660 cni.go:84] Creating CNI manager for ""
	I1018 18:23:00.570853  204660 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 18:23:00.570895  204660 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 18:23:00.570925  204660 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-192562 NodeName:default-k8s-diff-port-192562 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 18:23:00.571094  204660 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-192562"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 18:23:00.571179  204660 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 18:23:00.579144  204660 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 18:23:00.579211  204660 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 18:23:00.586785  204660 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1018 18:23:00.599619  204660 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 18:23:00.612834  204660 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1018 18:23:00.626080  204660 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1018 18:23:00.629923  204660 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 18:23:00.640634  204660 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 18:23:00.755290  204660 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 18:23:00.781518  204660 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/default-k8s-diff-port-192562 for IP: 192.168.76.2
	I1018 18:23:00.781593  204660 certs.go:195] generating shared ca certs ...
	I1018 18:23:00.781624  204660 certs.go:227] acquiring lock for ca certs: {Name:mk544ed642fa2832e9f6dd22fa45f3270b7c1ad4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:23:00.781823  204660 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key
	I1018 18:23:00.781899  204660 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key
	I1018 18:23:00.781935  204660 certs.go:257] generating profile certs ...
	I1018 18:23:00.782058  204660 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/default-k8s-diff-port-192562/client.key
	I1018 18:23:00.782146  204660 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/default-k8s-diff-port-192562/apiserver.key.59f65c97
	I1018 18:23:00.782232  204660 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/default-k8s-diff-port-192562/proxy-client.key
	I1018 18:23:00.782396  204660 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem (1338 bytes)
	W1018 18:23:00.782451  204660 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320_empty.pem, impossibly tiny 0 bytes
	I1018 18:23:00.782474  204660 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 18:23:00.782529  204660 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem (1078 bytes)
	I1018 18:23:00.782571  204660 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem (1123 bytes)
	I1018 18:23:00.782630  204660 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem (1675 bytes)
	I1018 18:23:00.782701  204660 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem (1708 bytes)
	I1018 18:23:00.783335  204660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 18:23:00.807212  204660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 18:23:00.826460  204660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 18:23:00.846445  204660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 18:23:00.876611  204660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/default-k8s-diff-port-192562/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1018 18:23:00.897831  204660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/default-k8s-diff-port-192562/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 18:23:00.922252  204660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/default-k8s-diff-port-192562/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 18:23:00.951716  204660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/default-k8s-diff-port-192562/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 18:23:00.984926  204660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /usr/share/ca-certificates/43202.pem (1708 bytes)
	I1018 18:23:01.011902  204660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 18:23:01.033576  204660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem --> /usr/share/ca-certificates/4320.pem (1338 bytes)
	I1018 18:23:01.054572  204660 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 18:23:01.070006  204660 ssh_runner.go:195] Run: openssl version
	I1018 18:23:01.076692  204660 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 18:23:01.086986  204660 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 18:23:01.091198  204660 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I1018 18:23:01.091307  204660 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 18:23:01.133777  204660 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 18:23:01.142769  204660 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4320.pem && ln -fs /usr/share/ca-certificates/4320.pem /etc/ssl/certs/4320.pem"
	I1018 18:23:01.151745  204660 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4320.pem
	I1018 18:23:01.156188  204660 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 17:19 /usr/share/ca-certificates/4320.pem
	I1018 18:23:01.156304  204660 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4320.pem
	I1018 18:23:01.197873  204660 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4320.pem /etc/ssl/certs/51391683.0"
	I1018 18:23:01.205968  204660 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43202.pem && ln -fs /usr/share/ca-certificates/43202.pem /etc/ssl/certs/43202.pem"
	I1018 18:23:01.215292  204660 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43202.pem
	I1018 18:23:01.219365  204660 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 17:19 /usr/share/ca-certificates/43202.pem
	I1018 18:23:01.219483  204660 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43202.pem
	I1018 18:23:01.261482  204660 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43202.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 18:23:01.270925  204660 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 18:23:01.274948  204660 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 18:23:01.316701  204660 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 18:23:01.358518  204660 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 18:23:01.400660  204660 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 18:23:01.445819  204660 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 18:23:01.491935  204660 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1018 18:23:01.557546  204660 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-192562 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-192562 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 18:23:01.557677  204660 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 18:23:01.557783  204660 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 18:23:01.669858  204660 cri.go:89] found id: "b95b10640928c3fde63ab1dc3d1f20d7b8532a3c6bb09b5b79dd506a1cada9c2"
	I1018 18:23:01.669938  204660 cri.go:89] found id: "01370573ce2751c5a9bc9cf2c5b653ed64758dd3e91f3aec786b7f16d88bf722"
	I1018 18:23:01.669956  204660 cri.go:89] found id: "a40f2fadeda1857088554cfe73930b819e69cca05e8a65552a5d8d7bb7b5946d"
	I1018 18:23:01.669985  204660 cri.go:89] found id: "23e7b4f21a923153503f5d9f363c452579100dd2a260750e3b7a35d6ca8dcb22"
	I1018 18:23:01.670014  204660 cri.go:89] found id: ""
	I1018 18:23:01.670082  204660 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 18:23:01.691606  204660 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T18:23:01Z" level=error msg="open /run/runc: no such file or directory"
	I1018 18:23:01.691768  204660 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 18:23:01.705684  204660 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 18:23:01.705749  204660 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 18:23:01.705834  204660 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 18:23:01.722511  204660 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 18:23:01.723368  204660 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-192562" does not appear in /home/jenkins/minikube-integration/21409-2509/kubeconfig
	I1018 18:23:01.723938  204660 kubeconfig.go:62] /home/jenkins/minikube-integration/21409-2509/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-192562" cluster setting kubeconfig missing "default-k8s-diff-port-192562" context setting]
	I1018 18:23:01.724743  204660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/kubeconfig: {Name:mk229d7d0b6575771899e0ce8a346bbddf5cf86b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:23:01.726595  204660 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 18:23:01.737199  204660 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1018 18:23:01.737270  204660 kubeadm.go:601] duration metric: took 31.503189ms to restartPrimaryControlPlane
	I1018 18:23:01.737293  204660 kubeadm.go:402] duration metric: took 179.762961ms to StartCluster
	I1018 18:23:01.737319  204660 settings.go:142] acquiring lock: {Name:mk3a3fd093bc95e20cc1842611fedcbe4a79e692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:23:01.737411  204660 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-2509/kubeconfig
	I1018 18:23:01.738942  204660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/kubeconfig: {Name:mk229d7d0b6575771899e0ce8a346bbddf5cf86b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:23:01.739236  204660 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 18:23:01.739631  204660 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 18:23:01.739700  204660 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-192562"
	I1018 18:23:01.739720  204660 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-192562"
	W1018 18:23:01.739726  204660 addons.go:247] addon storage-provisioner should already be in state true
	I1018 18:23:01.739746  204660 host.go:66] Checking if "default-k8s-diff-port-192562" exists ...
	I1018 18:23:01.740259  204660 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-192562 --format={{.State.Status}}
	I1018 18:23:01.740637  204660 config.go:182] Loaded profile config "default-k8s-diff-port-192562": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 18:23:01.740731  204660 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-192562"
	I1018 18:23:01.740768  204660 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-192562"
	I1018 18:23:01.741116  204660 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-192562 --format={{.State.Status}}
	I1018 18:23:01.741335  204660 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-192562"
	I1018 18:23:01.741369  204660 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-192562"
	W1018 18:23:01.741389  204660 addons.go:247] addon dashboard should already be in state true
	I1018 18:23:01.741463  204660 host.go:66] Checking if "default-k8s-diff-port-192562" exists ...
	I1018 18:23:01.741920  204660 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-192562 --format={{.State.Status}}
	I1018 18:23:01.744605  204660 out.go:179] * Verifying Kubernetes components...
	I1018 18:23:01.751726  204660 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 18:23:01.787729  204660 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1018 18:23:01.787851  204660 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 18:23:01.793495  204660 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 18:23:01.793518  204660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 18:23:01.793585  204660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-192562
	I1018 18:23:01.801049  204660 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1018 18:23:01.803907  204660 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1018 18:23:01.803925  204660 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1018 18:23:01.803978  204660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-192562
	I1018 18:23:01.804304  204660 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-192562"
	W1018 18:23:01.804319  204660 addons.go:247] addon default-storageclass should already be in state true
	I1018 18:23:01.804343  204660 host.go:66] Checking if "default-k8s-diff-port-192562" exists ...
	I1018 18:23:01.804760  204660 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-192562 --format={{.State.Status}}
	I1018 18:23:01.862074  204660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/default-k8s-diff-port-192562/id_rsa Username:docker}
	I1018 18:23:01.876798  204660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/default-k8s-diff-port-192562/id_rsa Username:docker}
	I1018 18:23:01.886702  204660 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 18:23:01.886722  204660 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 18:23:01.886790  204660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-192562
	I1018 18:23:01.917288  204660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/default-k8s-diff-port-192562/id_rsa Username:docker}
	I1018 18:23:02.155089  204660 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 18:23:02.218039  204660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 18:23:02.222407  204660 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-192562" to be "Ready" ...
	I1018 18:23:02.226372  204660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 18:23:02.339045  204660 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1018 18:23:02.339069  204660 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1018 18:23:02.434489  204660 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1018 18:23:02.434514  204660 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1018 18:23:02.497738  204660 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1018 18:23:02.497764  204660 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1018 18:23:02.529633  204660 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1018 18:23:02.529658  204660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1018 18:23:02.554726  204660 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1018 18:23:02.554750  204660 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1018 18:23:02.573925  204660 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1018 18:23:02.573948  204660 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1018 18:23:02.598479  204660 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1018 18:23:02.598503  204660 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1018 18:23:02.622231  204660 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1018 18:23:02.622255  204660 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1018 18:23:02.646984  204660 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 18:23:02.647004  204660 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1018 18:23:02.672412  204660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 18:23:06.401939  204660 node_ready.go:49] node "default-k8s-diff-port-192562" is "Ready"
	I1018 18:23:06.401972  204660 node_ready.go:38] duration metric: took 4.179518607s for node "default-k8s-diff-port-192562" to be "Ready" ...
	I1018 18:23:06.401988  204660 api_server.go:52] waiting for apiserver process to appear ...
	I1018 18:23:06.402043  204660 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 18:23:08.338343  204660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.120226531s)
	I1018 18:23:08.338398  204660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.112004314s)
	I1018 18:23:08.338667  204660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.666221089s)
	I1018 18:23:08.338792  204660 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.936731662s)
	I1018 18:23:08.338805  204660 api_server.go:72] duration metric: took 6.599520364s to wait for apiserver process to appear ...
	I1018 18:23:08.338812  204660 api_server.go:88] waiting for apiserver healthz status ...
	I1018 18:23:08.338827  204660 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1018 18:23:08.341554  204660 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-192562 addons enable metrics-server
	
	I1018 18:23:08.363852  204660 api_server.go:279] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 18:23:08.363880  204660 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 18:23:08.409569  204660 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
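The only failing probe in the 500 response above is [-]poststarthook/rbac/bootstrap-roles, which normally clears once the apiserver finishes seeding its default RBAC roles shortly after startup; the addon command simply hit that window. The same verbose health report can be re-checked by hand; a minimal sketch, assuming kubectl is pointed at this profile's context and that the host can reach the node IP used by the docker driver:

	# Verbose health report via the API server proxy
	kubectl --context default-k8s-diff-port-192562 get --raw '/healthz?verbose'
	# Or against the profile's apiserver port directly (self-signed cert, hence -k)
	curl -k https://192.168.76.2:8444/healthz?verbose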
	
	
	==> CRI-O <==
	Oct 18 18:22:56 embed-certs-213943 crio[837]: time="2025-10-18T18:22:56.366492953Z" level=info msg="Created container f733843b72527c72c17e97bcdc7d96d7aef81000b0e4e21569a8ed28ceb6e8ac: kube-system/coredns-66bc5c9577-grf2z/coredns" id=a18eb54e-4734-402d-be21-f7dcff4882f7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 18:22:56 embed-certs-213943 crio[837]: time="2025-10-18T18:22:56.367621829Z" level=info msg="Starting container: f733843b72527c72c17e97bcdc7d96d7aef81000b0e4e21569a8ed28ceb6e8ac" id=93d298a3-4131-46e6-991d-4238c8e39f0f name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 18:22:56 embed-certs-213943 crio[837]: time="2025-10-18T18:22:56.377194424Z" level=info msg="Started container" PID=1751 containerID=f733843b72527c72c17e97bcdc7d96d7aef81000b0e4e21569a8ed28ceb6e8ac description=kube-system/coredns-66bc5c9577-grf2z/coredns id=93d298a3-4131-46e6-991d-4238c8e39f0f name=/runtime.v1.RuntimeService/StartContainer sandboxID=c863f51c32b5fffc9e4bee398095029f5894390fcc1f5d1e7807098a9c6abc61
	Oct 18 18:22:59 embed-certs-213943 crio[837]: time="2025-10-18T18:22:59.955110774Z" level=info msg="Running pod sandbox: default/busybox/POD" id=b98d87c1-4404-42d5-9f7d-1a8139027d83 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 18:22:59 embed-certs-213943 crio[837]: time="2025-10-18T18:22:59.955195854Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 18:22:59 embed-certs-213943 crio[837]: time="2025-10-18T18:22:59.960571677Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:5e5533ce5c17acc77f7fedf491129bddc96d4cef0e71092071b5b8827d074700 UID:adef2fd3-de79-4e18-84a0-fe55d89ee37d NetNS:/var/run/netns/48b7132f-1e0c-48e1-89f0-4c7c7bac0866 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000138ef0}] Aliases:map[]}"
	Oct 18 18:22:59 embed-certs-213943 crio[837]: time="2025-10-18T18:22:59.960609306Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 18 18:22:59 embed-certs-213943 crio[837]: time="2025-10-18T18:22:59.974555774Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:5e5533ce5c17acc77f7fedf491129bddc96d4cef0e71092071b5b8827d074700 UID:adef2fd3-de79-4e18-84a0-fe55d89ee37d NetNS:/var/run/netns/48b7132f-1e0c-48e1-89f0-4c7c7bac0866 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000138ef0}] Aliases:map[]}"
	Oct 18 18:22:59 embed-certs-213943 crio[837]: time="2025-10-18T18:22:59.97470145Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 18 18:22:59 embed-certs-213943 crio[837]: time="2025-10-18T18:22:59.978501294Z" level=info msg="Ran pod sandbox 5e5533ce5c17acc77f7fedf491129bddc96d4cef0e71092071b5b8827d074700 with infra container: default/busybox/POD" id=b98d87c1-4404-42d5-9f7d-1a8139027d83 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 18:22:59 embed-certs-213943 crio[837]: time="2025-10-18T18:22:59.979698183Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=eadfc467-ba6c-44e5-a66f-c2fede563068 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 18:22:59 embed-certs-213943 crio[837]: time="2025-10-18T18:22:59.979823567Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=eadfc467-ba6c-44e5-a66f-c2fede563068 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 18:22:59 embed-certs-213943 crio[837]: time="2025-10-18T18:22:59.979858702Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=eadfc467-ba6c-44e5-a66f-c2fede563068 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 18:22:59 embed-certs-213943 crio[837]: time="2025-10-18T18:22:59.983160644Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d1b2f0f8-e612-48f9-97e0-f9d40bccf018 name=/runtime.v1.ImageService/PullImage
	Oct 18 18:22:59 embed-certs-213943 crio[837]: time="2025-10-18T18:22:59.986487119Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 18 18:23:02 embed-certs-213943 crio[837]: time="2025-10-18T18:23:02.267014724Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=d1b2f0f8-e612-48f9-97e0-f9d40bccf018 name=/runtime.v1.ImageService/PullImage
	Oct 18 18:23:02 embed-certs-213943 crio[837]: time="2025-10-18T18:23:02.268097814Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=37faa9b5-8a6e-4a47-b588-1dd9d69a5791 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 18:23:02 embed-certs-213943 crio[837]: time="2025-10-18T18:23:02.274207587Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c0c21dc4-f537-4470-89c5-7d28cf9375db name=/runtime.v1.ImageService/ImageStatus
	Oct 18 18:23:02 embed-certs-213943 crio[837]: time="2025-10-18T18:23:02.282212079Z" level=info msg="Creating container: default/busybox/busybox" id=9ece943a-f1f1-4306-8655-ffff3d5477d7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 18:23:02 embed-certs-213943 crio[837]: time="2025-10-18T18:23:02.283157224Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 18:23:02 embed-certs-213943 crio[837]: time="2025-10-18T18:23:02.289150589Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 18:23:02 embed-certs-213943 crio[837]: time="2025-10-18T18:23:02.289762338Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 18:23:02 embed-certs-213943 crio[837]: time="2025-10-18T18:23:02.314016516Z" level=info msg="Created container 3b387b07bcc0e1a2702fe1f8d3cde86631aa226afe3c1129bf5ae71379fb8527: default/busybox/busybox" id=9ece943a-f1f1-4306-8655-ffff3d5477d7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 18:23:02 embed-certs-213943 crio[837]: time="2025-10-18T18:23:02.321413017Z" level=info msg="Starting container: 3b387b07bcc0e1a2702fe1f8d3cde86631aa226afe3c1129bf5ae71379fb8527" id=7522cfc6-ebb3-4401-9a59-d0a70d5811f3 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 18:23:02 embed-certs-213943 crio[837]: time="2025-10-18T18:23:02.326991542Z" level=info msg="Started container" PID=1805 containerID=3b387b07bcc0e1a2702fe1f8d3cde86631aa226afe3c1129bf5ae71379fb8527 description=default/busybox/busybox id=7522cfc6-ebb3-4401-9a59-d0a70d5811f3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5e5533ce5c17acc77f7fedf491129bddc96d4cef0e71092071b5b8827d074700
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	3b387b07bcc0e       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago        Running             busybox                   0                   5e5533ce5c17a       busybox                                      default
	f733843b72527       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      13 seconds ago       Running             coredns                   0                   c863f51c32b5f       coredns-66bc5c9577-grf2z                     kube-system
	30dc9dd91b869       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      14 seconds ago       Running             storage-provisioner       0                   ad3e32e701b49       storage-provisioner                          kube-system
	a1ad9d591a770       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      55 seconds ago       Running             kube-proxy                0                   b9cb8d500c1f5       kube-proxy-gcf8n                             kube-system
	d76012e0c2f59       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      55 seconds ago       Running             kindnet-cni               0                   74f84e8a1cc7c       kindnet-44fc8                                kube-system
	33033d667d8e0       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   8d424fa1f7423       kube-scheduler-embed-certs-213943            kube-system
	d19f85647a673       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   2ea2b75825de3       kube-controller-manager-embed-certs-213943   kube-system
	58f5ad5641b17       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   6e3f4947b8a5f       kube-apiserver-embed-certs-213943            kube-system
	fdd2ed0dc673a       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   1a6d8b6cda0c9       etcd-embed-certs-213943                      kube-system
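The same inventory can be pulled straight from CRI-O on the node while the cluster is running; a minimal sketch, reusing the profile name and the coredns container ID shown above:

	# List every CRI-O managed container, including exited ones
	minikube -p embed-certs-213943 ssh -- sudo crictl ps -a
	# Full runtime status for the coredns container from the table above
	minikube -p embed-certs-213943 ssh -- sudo crictl inspect f733843b72527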
	
	
	==> coredns [f733843b72527c72c17e97bcdc7d96d7aef81000b0e4e21569a8ed28ceb6e8ac] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42095 - 65520 "HINFO IN 7296806334128101268.2543989091261403565. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023813598s
	
	
	==> describe nodes <==
	Name:               embed-certs-213943
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-213943
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=embed-certs-213943
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T18_22_10_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 18:22:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-213943
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 18:23:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 18:22:55 +0000   Sat, 18 Oct 2025 18:22:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 18:22:55 +0000   Sat, 18 Oct 2025 18:22:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 18:22:55 +0000   Sat, 18 Oct 2025 18:22:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 18:22:55 +0000   Sat, 18 Oct 2025 18:22:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-213943
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                af083a40-edc0-4386-b2b1-7b1c8d51d4fc
	  Boot ID:                    8a1d6305-2994-4aa4-a4cc-6b62966b9918
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-grf2z                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     56s
	  kube-system                 etcd-embed-certs-213943                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         62s
	  kube-system                 kindnet-44fc8                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      56s
	  kube-system                 kube-apiserver-embed-certs-213943             250m (12%)    0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-controller-manager-embed-certs-213943    200m (10%)    0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-proxy-gcf8n                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-scheduler-embed-certs-213943             100m (5%)     0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 54s                kube-proxy       
	  Normal   Starting                 69s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 69s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  68s (x8 over 68s)  kubelet          Node embed-certs-213943 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    68s (x8 over 68s)  kubelet          Node embed-certs-213943 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     68s (x8 over 68s)  kubelet          Node embed-certs-213943 status is now: NodeHasSufficientPID
	  Normal   Starting                 61s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 61s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  61s                kubelet          Node embed-certs-213943 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s                kubelet          Node embed-certs-213943 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s                kubelet          Node embed-certs-213943 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           57s                node-controller  Node embed-certs-213943 event: Registered Node embed-certs-213943 in Controller
	  Normal   NodeReady                15s                kubelet          Node embed-certs-213943 status is now: NodeReady
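The node dump above is simply kubectl describe output captured by minikube logs; it can be regenerated at any time while the profile is up, a minimal sketch:

	# Same node view, live
	kubectl --context embed-certs-213943 describe node embed-certs-213943
	# Condensed status: addresses, roles and versions only
	kubectl --context embed-certs-213943 get nodes -o wide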
	
	
	==> dmesg <==
	[Oct18 18:00] overlayfs: idmapped layers are currently not supported
	[Oct18 18:01] overlayfs: idmapped layers are currently not supported
	[Oct18 18:02] overlayfs: idmapped layers are currently not supported
	[Oct18 18:04] overlayfs: idmapped layers are currently not supported
	[ +24.403909] overlayfs: idmapped layers are currently not supported
	[  +6.162774] overlayfs: idmapped layers are currently not supported
	[Oct18 18:05] overlayfs: idmapped layers are currently not supported
	[ +25.128760] overlayfs: idmapped layers are currently not supported
	[Oct18 18:06] overlayfs: idmapped layers are currently not supported
	[Oct18 18:07] overlayfs: idmapped layers are currently not supported
	[Oct18 18:08] overlayfs: idmapped layers are currently not supported
	[Oct18 18:09] overlayfs: idmapped layers are currently not supported
	[Oct18 18:11] overlayfs: idmapped layers are currently not supported
	[Oct18 18:13] overlayfs: idmapped layers are currently not supported
	[ +30.969240] overlayfs: idmapped layers are currently not supported
	[Oct18 18:15] overlayfs: idmapped layers are currently not supported
	[Oct18 18:16] overlayfs: idmapped layers are currently not supported
	[Oct18 18:17] overlayfs: idmapped layers are currently not supported
	[ +23.167826] overlayfs: idmapped layers are currently not supported
	[Oct18 18:18] overlayfs: idmapped layers are currently not supported
	[ +38.509809] overlayfs: idmapped layers are currently not supported
	[Oct18 18:19] overlayfs: idmapped layers are currently not supported
	[Oct18 18:21] overlayfs: idmapped layers are currently not supported
	[Oct18 18:22] overlayfs: idmapped layers are currently not supported
	[Oct18 18:23] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [fdd2ed0dc673a07604dab4b67ddc8376dfeb7f6d0dcdf12105f4efedb016398b] <==
	{"level":"warn","ts":"2025-10-18T18:22:05.213631Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:22:05.223251Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:22:05.244521Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:22:05.284294Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:22:05.302875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:22:05.321920Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:22:05.340804Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:22:05.356720Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:22:05.375055Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:22:05.397735Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:22:05.425815Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:22:05.445663Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:22:05.464381Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:22:05.485522Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:22:05.511242Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:22:05.531407Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:22:05.548701Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:22:05.568896Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:22:05.584386Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:22:05.602596Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:22:05.625809Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:22:05.655146Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:22:05.685157Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:22:05.707152Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:22:05.812798Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50814","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 18:23:10 up  2:05,  0 user,  load average: 3.13, 3.06, 2.76
	Linux embed-certs-213943 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d76012e0c2f59b2f57e97dfeb90026e2f842016ccc2f2b5665335089124ae249] <==
	I1018 18:22:15.409537       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 18:22:15.409937       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1018 18:22:15.410086       1 main.go:148] setting mtu 1500 for CNI 
	I1018 18:22:15.410106       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 18:22:15.410117       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T18:22:15Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 18:22:15.608342       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 18:22:15.608392       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 18:22:15.608425       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 18:22:15.609509       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1018 18:22:45.609057       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1018 18:22:45.609068       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1018 18:22:45.609175       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1018 18:22:45.610460       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1018 18:22:46.909436       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 18:22:46.909506       1 metrics.go:72] Registering metrics
	I1018 18:22:46.909565       1 controller.go:711] "Syncing nftables rules"
	I1018 18:22:55.610713       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 18:22:55.610751       1 main.go:301] handling current node
	I1018 18:23:05.608991       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 18:23:05.609130       1 main.go:301] handling current node
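The i/o timeouts at 18:22:45 are kindnet's informers failing to reach the in-cluster apiserver Service (10.96.0.1:443), most likely because the list requests were issued before kube-proxy had finished programming the Service rules; the caches sync a second later and the controller settles into its per-node handling loop. The pod's live log can be followed directly; a minimal sketch, reusing the pod name from the container table above:

	# Stream kindnet logs from the kube-system pod seen in the container listing
	kubectl --context embed-certs-213943 -n kube-system logs kindnet-44fc8 --follow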
	
	
	==> kube-apiserver [58f5ad5641b176fb37aa6a530953cf712c3f02dee63f10c956d91c70ace60402] <==
	I1018 18:22:06.684115       1 controller.go:667] quota admission added evaluator for: namespaces
	E1018 18:22:06.705114       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1018 18:22:06.747729       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 18:22:06.747856       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1018 18:22:06.804519       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 18:22:06.811091       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 18:22:06.911621       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 18:22:07.438316       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1018 18:22:07.443643       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1018 18:22:07.443667       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 18:22:08.281553       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 18:22:08.336656       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 18:22:08.408384       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1018 18:22:08.441747       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1018 18:22:08.442939       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 18:22:08.450917       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 18:22:08.555605       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 18:22:09.241591       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 18:22:09.267792       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1018 18:22:09.278803       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1018 18:22:13.559626       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 18:22:14.745482       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1018 18:22:14.831451       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 18:22:14.855569       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1018 18:23:08.788841       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:36394: use of closed network connection
	
	
	==> kube-controller-manager [d19f85647a6736f186fbfe0105e9d4346d27738f27fc8618af2319be01a22cef] <==
	I1018 18:22:13.572244       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 18:22:13.575174       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-213943" podCIDRs=["10.244.0.0/24"]
	I1018 18:22:13.599272       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1018 18:22:13.602758       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1018 18:22:13.602846       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 18:22:13.602881       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1018 18:22:13.603244       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1018 18:22:13.603546       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-213943"
	I1018 18:22:13.603595       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1018 18:22:13.603736       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1018 18:22:13.603867       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 18:22:13.604753       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1018 18:22:13.604828       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1018 18:22:13.605896       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1018 18:22:13.606029       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1018 18:22:13.606088       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 18:22:13.607180       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1018 18:22:13.607441       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 18:22:13.608502       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 18:22:13.610856       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1018 18:22:13.612369       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 18:22:13.618522       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 18:22:13.627821       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 18:22:14.717420       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="EndpointSlice informer cache is out of date"
	I1018 18:22:58.618705       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [a1ad9d591a7708facc8fc10f7cda045741925884f74cb72499ee1df380da39df] <==
	I1018 18:22:15.395066       1 server_linux.go:53] "Using iptables proxy"
	I1018 18:22:15.502770       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 18:22:15.603393       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 18:22:15.603427       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1018 18:22:15.603498       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 18:22:15.646879       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 18:22:15.646935       1 server_linux.go:132] "Using iptables Proxier"
	I1018 18:22:15.653807       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 18:22:15.654315       1 server.go:527] "Version info" version="v1.34.1"
	I1018 18:22:15.654340       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 18:22:15.663251       1 config.go:200] "Starting service config controller"
	I1018 18:22:15.663539       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 18:22:15.663580       1 config.go:106] "Starting endpoint slice config controller"
	I1018 18:22:15.663585       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 18:22:15.663596       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 18:22:15.663609       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 18:22:15.666205       1 config.go:309] "Starting node config controller"
	I1018 18:22:15.666282       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 18:22:15.666327       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 18:22:15.763702       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 18:22:15.763744       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 18:22:15.763711       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [33033d667d8e0f7dfd9283a0469ed9af4b99f4e52c52e7fe2df886405c7815eb] <==
	E1018 18:22:06.738390       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 18:22:06.738483       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 18:22:06.759702       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1018 18:22:06.761857       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 18:22:06.762102       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 18:22:06.763314       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 18:22:06.765147       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 18:22:06.765239       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 18:22:06.765305       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 18:22:06.765405       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 18:22:06.765462       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 18:22:07.587907       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 18:22:07.620370       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 18:22:07.658058       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 18:22:07.658472       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 18:22:07.683163       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 18:22:07.690641       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 18:22:07.694876       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 18:22:07.772912       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1018 18:22:07.788667       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 18:22:07.888381       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 18:22:07.925592       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 18:22:07.933490       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 18:22:07.990718       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I1018 18:22:09.419697       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 18:22:13 embed-certs-213943 kubelet[1311]: I1018 18:22:13.661987    1311 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 18 18:22:13 embed-certs-213943 kubelet[1311]: I1018 18:22:13.662618    1311 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 18 18:22:14 embed-certs-213943 kubelet[1311]: I1018 18:22:14.929852    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/b35c637a-9afc-46ee-93dd-89db133869e9-cni-cfg\") pod \"kindnet-44fc8\" (UID: \"b35c637a-9afc-46ee-93dd-89db133869e9\") " pod="kube-system/kindnet-44fc8"
	Oct 18 18:22:14 embed-certs-213943 kubelet[1311]: I1018 18:22:14.929898    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b35c637a-9afc-46ee-93dd-89db133869e9-lib-modules\") pod \"kindnet-44fc8\" (UID: \"b35c637a-9afc-46ee-93dd-89db133869e9\") " pod="kube-system/kindnet-44fc8"
	Oct 18 18:22:14 embed-certs-213943 kubelet[1311]: I1018 18:22:14.929917    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h575b\" (UniqueName: \"kubernetes.io/projected/b35c637a-9afc-46ee-93dd-89db133869e9-kube-api-access-h575b\") pod \"kindnet-44fc8\" (UID: \"b35c637a-9afc-46ee-93dd-89db133869e9\") " pod="kube-system/kindnet-44fc8"
	Oct 18 18:22:14 embed-certs-213943 kubelet[1311]: I1018 18:22:14.929939    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0f81c7f5-8e47-4826-bdb3-867782c394a7-kube-proxy\") pod \"kube-proxy-gcf8n\" (UID: \"0f81c7f5-8e47-4826-bdb3-867782c394a7\") " pod="kube-system/kube-proxy-gcf8n"
	Oct 18 18:22:14 embed-certs-213943 kubelet[1311]: I1018 18:22:14.929959    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0f81c7f5-8e47-4826-bdb3-867782c394a7-lib-modules\") pod \"kube-proxy-gcf8n\" (UID: \"0f81c7f5-8e47-4826-bdb3-867782c394a7\") " pod="kube-system/kube-proxy-gcf8n"
	Oct 18 18:22:14 embed-certs-213943 kubelet[1311]: I1018 18:22:14.929974    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vb7ws\" (UniqueName: \"kubernetes.io/projected/0f81c7f5-8e47-4826-bdb3-867782c394a7-kube-api-access-vb7ws\") pod \"kube-proxy-gcf8n\" (UID: \"0f81c7f5-8e47-4826-bdb3-867782c394a7\") " pod="kube-system/kube-proxy-gcf8n"
	Oct 18 18:22:14 embed-certs-213943 kubelet[1311]: I1018 18:22:14.929990    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b35c637a-9afc-46ee-93dd-89db133869e9-xtables-lock\") pod \"kindnet-44fc8\" (UID: \"b35c637a-9afc-46ee-93dd-89db133869e9\") " pod="kube-system/kindnet-44fc8"
	Oct 18 18:22:14 embed-certs-213943 kubelet[1311]: I1018 18:22:14.930008    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0f81c7f5-8e47-4826-bdb3-867782c394a7-xtables-lock\") pod \"kube-proxy-gcf8n\" (UID: \"0f81c7f5-8e47-4826-bdb3-867782c394a7\") " pod="kube-system/kube-proxy-gcf8n"
	Oct 18 18:22:15 embed-certs-213943 kubelet[1311]: I1018 18:22:15.046066    1311 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 18 18:22:15 embed-certs-213943 kubelet[1311]: W1018 18:22:15.186818    1311 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/f6d884df9095b5a97c2ba5df164207ee5c937524354408254d52ae7a929463c6/crio-74f84e8a1cc7c92888ca1462d6f4450fbbe22dcad61439f5af5746d193c3f953 WatchSource:0}: Error finding container 74f84e8a1cc7c92888ca1462d6f4450fbbe22dcad61439f5af5746d193c3f953: Status 404 returned error can't find the container with id 74f84e8a1cc7c92888ca1462d6f4450fbbe22dcad61439f5af5746d193c3f953
	Oct 18 18:22:15 embed-certs-213943 kubelet[1311]: I1018 18:22:15.389491    1311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-44fc8" podStartSLOduration=1.389469627 podStartE2EDuration="1.389469627s" podCreationTimestamp="2025-10-18 18:22:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 18:22:15.358422821 +0000 UTC m=+6.304697023" watchObservedRunningTime="2025-10-18 18:22:15.389469627 +0000 UTC m=+6.335743837"
	Oct 18 18:22:15 embed-certs-213943 kubelet[1311]: I1018 18:22:15.460089    1311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gcf8n" podStartSLOduration=1.460069332 podStartE2EDuration="1.460069332s" podCreationTimestamp="2025-10-18 18:22:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 18:22:15.389846952 +0000 UTC m=+6.336121162" watchObservedRunningTime="2025-10-18 18:22:15.460069332 +0000 UTC m=+6.406343534"
	Oct 18 18:22:55 embed-certs-213943 kubelet[1311]: I1018 18:22:55.911526    1311 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 18 18:22:56 embed-certs-213943 kubelet[1311]: I1018 18:22:56.019474    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcmvg\" (UniqueName: \"kubernetes.io/projected/0a6125b1-a0eb-4600-9b53-35017d6ee21b-kube-api-access-qcmvg\") pod \"coredns-66bc5c9577-grf2z\" (UID: \"0a6125b1-a0eb-4600-9b53-35017d6ee21b\") " pod="kube-system/coredns-66bc5c9577-grf2z"
	Oct 18 18:22:56 embed-certs-213943 kubelet[1311]: I1018 18:22:56.019532    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/8b4837a6-135d-4719-b80f-0e37d07f3fe4-tmp\") pod \"storage-provisioner\" (UID: \"8b4837a6-135d-4719-b80f-0e37d07f3fe4\") " pod="kube-system/storage-provisioner"
	Oct 18 18:22:56 embed-certs-213943 kubelet[1311]: I1018 18:22:56.019559    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a6125b1-a0eb-4600-9b53-35017d6ee21b-config-volume\") pod \"coredns-66bc5c9577-grf2z\" (UID: \"0a6125b1-a0eb-4600-9b53-35017d6ee21b\") " pod="kube-system/coredns-66bc5c9577-grf2z"
	Oct 18 18:22:56 embed-certs-213943 kubelet[1311]: I1018 18:22:56.019581    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rs2hk\" (UniqueName: \"kubernetes.io/projected/8b4837a6-135d-4719-b80f-0e37d07f3fe4-kube-api-access-rs2hk\") pod \"storage-provisioner\" (UID: \"8b4837a6-135d-4719-b80f-0e37d07f3fe4\") " pod="kube-system/storage-provisioner"
	Oct 18 18:22:56 embed-certs-213943 kubelet[1311]: W1018 18:22:56.272306    1311 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/f6d884df9095b5a97c2ba5df164207ee5c937524354408254d52ae7a929463c6/crio-ad3e32e701b49ac8c644c2c0b8af66f88656296123d2366a951e472da6aea2da WatchSource:0}: Error finding container ad3e32e701b49ac8c644c2c0b8af66f88656296123d2366a951e472da6aea2da: Status 404 returned error can't find the container with id ad3e32e701b49ac8c644c2c0b8af66f88656296123d2366a951e472da6aea2da
	Oct 18 18:22:56 embed-certs-213943 kubelet[1311]: W1018 18:22:56.295766    1311 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/f6d884df9095b5a97c2ba5df164207ee5c937524354408254d52ae7a929463c6/crio-c863f51c32b5fffc9e4bee398095029f5894390fcc1f5d1e7807098a9c6abc61 WatchSource:0}: Error finding container c863f51c32b5fffc9e4bee398095029f5894390fcc1f5d1e7807098a9c6abc61: Status 404 returned error can't find the container with id c863f51c32b5fffc9e4bee398095029f5894390fcc1f5d1e7807098a9c6abc61
	Oct 18 18:22:56 embed-certs-213943 kubelet[1311]: I1018 18:22:56.495447    1311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=41.495422187 podStartE2EDuration="41.495422187s" podCreationTimestamp="2025-10-18 18:22:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 18:22:56.493956979 +0000 UTC m=+47.440231320" watchObservedRunningTime="2025-10-18 18:22:56.495422187 +0000 UTC m=+47.441696397"
	Oct 18 18:22:56 embed-certs-213943 kubelet[1311]: I1018 18:22:56.495913    1311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-grf2z" podStartSLOduration=42.495901545 podStartE2EDuration="42.495901545s" podCreationTimestamp="2025-10-18 18:22:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 18:22:56.477037796 +0000 UTC m=+47.423312022" watchObservedRunningTime="2025-10-18 18:22:56.495901545 +0000 UTC m=+47.442175755"
	Oct 18 18:22:59 embed-certs-213943 kubelet[1311]: I1018 18:22:59.755283    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sllvc\" (UniqueName: \"kubernetes.io/projected/adef2fd3-de79-4e18-84a0-fe55d89ee37d-kube-api-access-sllvc\") pod \"busybox\" (UID: \"adef2fd3-de79-4e18-84a0-fe55d89ee37d\") " pod="default/busybox"
	Oct 18 18:22:59 embed-certs-213943 kubelet[1311]: W1018 18:22:59.976395    1311 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/f6d884df9095b5a97c2ba5df164207ee5c937524354408254d52ae7a929463c6/crio-5e5533ce5c17acc77f7fedf491129bddc96d4cef0e71092071b5b8827d074700 WatchSource:0}: Error finding container 5e5533ce5c17acc77f7fedf491129bddc96d4cef0e71092071b5b8827d074700: Status 404 returned error can't find the container with id 5e5533ce5c17acc77f7fedf491129bddc96d4cef0e71092071b5b8827d074700
	
	
	==> storage-provisioner [30dc9dd91b869b6d96696546f79f25ed11d6b6b3beb7efa98ef4b923ea34022c] <==
	I1018 18:22:56.352232       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 18:22:56.399734       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 18:22:56.399862       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1018 18:22:56.404183       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:22:56.414164       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 18:22:56.414437       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	W1018 18:22:56.417416       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 18:22:56.421400       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-213943_bad287fe-16ca-47e2-ad8e-6201aee2d7ed!
	I1018 18:22:56.422042       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"df0c2d90-b1dc-4b33-97ec-b51fa8382283", APIVersion:"v1", ResourceVersion:"457", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-213943_bad287fe-16ca-47e2-ad8e-6201aee2d7ed became leader
	W1018 18:22:56.450021       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 18:22:56.523660       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-213943_bad287fe-16ca-47e2-ad8e-6201aee2d7ed!
	W1018 18:22:58.453080       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:22:58.457557       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:23:00.461022       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:23:00.469265       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:23:02.482290       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:23:02.489217       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:23:04.493243       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:23:04.500386       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:23:06.504723       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:23:06.510847       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:23:08.514066       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:23:08.521054       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:23:10.525323       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:23:10.531169       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
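The repeated "v1 Endpoints is deprecated" warnings in the storage-provisioner log above come from its leader election, which still renews the kube-system/k8s.io-minikube-hostpath Endpoints object named in the LeaderElection event. A minimal way to look at that object by hand, assuming kubectl is pointed at this profile's context (the context and object names are taken from the log; everything else is illustrative):

	kubectl --context embed-certs-213943 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
	# The current holder and renew time are typically recorded in the
	# control-plane.alpha.kubernetes.io/leader annotation of this object.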
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-213943 -n embed-certs-213943
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-213943 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.80s)
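The post-mortem above reduces to two manual checks; a sketch of running them by hand against the same profile (the commands are the ones the harness ran above, and the availability of the local out/ binary is an assumption):

	out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-213943 -n embed-certs-213943
	kubectl --context embed-certs-213943 get po -A --field-selector=status.phase!=Running \
	  -o=jsonpath='{.items[*].metadata.name}'
	# An empty result from the second command means every pod reported phase Running.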

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (6.65s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-192562 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p default-k8s-diff-port-192562 --alsologtostderr -v=1: exit status 80 (2.106829338s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-192562 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 18:24:00.706289  209818 out.go:360] Setting OutFile to fd 1 ...
	I1018 18:24:00.706457  209818 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 18:24:00.706487  209818 out.go:374] Setting ErrFile to fd 2...
	I1018 18:24:00.706506  209818 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 18:24:00.706791  209818 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-2509/.minikube/bin
	I1018 18:24:00.707089  209818 out.go:368] Setting JSON to false
	I1018 18:24:00.707141  209818 mustload.go:65] Loading cluster: default-k8s-diff-port-192562
	I1018 18:24:00.707538  209818 config.go:182] Loaded profile config "default-k8s-diff-port-192562": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 18:24:00.708033  209818 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-192562 --format={{.State.Status}}
	I1018 18:24:00.725474  209818 host.go:66] Checking if "default-k8s-diff-port-192562" exists ...
	I1018 18:24:00.725786  209818 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 18:24:00.787634  209818 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-18 18:24:00.774556089 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 18:24:00.788394  209818 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-192562 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s
(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1018 18:24:00.791826  209818 out.go:179] * Pausing node default-k8s-diff-port-192562 ... 
	I1018 18:24:00.795460  209818 host.go:66] Checking if "default-k8s-diff-port-192562" exists ...
	I1018 18:24:00.795808  209818 ssh_runner.go:195] Run: systemctl --version
	I1018 18:24:00.795862  209818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-192562
	I1018 18:24:00.815196  209818 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/default-k8s-diff-port-192562/id_rsa Username:docker}
	I1018 18:24:00.919641  209818 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 18:24:00.934101  209818 pause.go:52] kubelet running: true
	I1018 18:24:00.934167  209818 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 18:24:01.232155  209818 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 18:24:01.232251  209818 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 18:24:01.315170  209818 cri.go:89] found id: "77450e891d0f89a28fe61fe538b628c44c0a3acdc00441daeaf6962e3dc60913"
	I1018 18:24:01.315194  209818 cri.go:89] found id: "6924b85ba570a30f851ea60cbaf6498eaf85975f8883a16972cc0b614db3ae1a"
	I1018 18:24:01.315200  209818 cri.go:89] found id: "62679bc7de3d96ea22f1ec1fe03d9713354a7daeddddb6d200b8b0232b3a9220"
	I1018 18:24:01.315203  209818 cri.go:89] found id: "79df2c9185f91fe68153976c279dd9aaa7775b92571473ea614f468db51721de"
	I1018 18:24:01.315207  209818 cri.go:89] found id: "54c91171549a4e5775393d4768d527b1d8e22fd30e268495e4d3b6100ec319a5"
	I1018 18:24:01.315211  209818 cri.go:89] found id: "b95b10640928c3fde63ab1dc3d1f20d7b8532a3c6bb09b5b79dd506a1cada9c2"
	I1018 18:24:01.315215  209818 cri.go:89] found id: "01370573ce2751c5a9bc9cf2c5b653ed64758dd3e91f3aec786b7f16d88bf722"
	I1018 18:24:01.315218  209818 cri.go:89] found id: "a40f2fadeda1857088554cfe73930b819e69cca05e8a65552a5d8d7bb7b5946d"
	I1018 18:24:01.315222  209818 cri.go:89] found id: "23e7b4f21a923153503f5d9f363c452579100dd2a260750e3b7a35d6ca8dcb22"
	I1018 18:24:01.315228  209818 cri.go:89] found id: "d724bfd66793bad839089c2ac7e4752e48c341e43c36b8084731177c7fea4183"
	I1018 18:24:01.315232  209818 cri.go:89] found id: "968df3fa8857ba03f182ffe49abd49d62aa437f6426d77631a03400f7324c070"
	I1018 18:24:01.315235  209818 cri.go:89] found id: ""
	I1018 18:24:01.315288  209818 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 18:24:01.336627  209818 retry.go:31] will retry after 339.089093ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T18:24:01Z" level=error msg="open /run/runc: no such file or directory"
	I1018 18:24:01.676189  209818 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 18:24:01.690162  209818 pause.go:52] kubelet running: false
	I1018 18:24:01.690228  209818 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 18:24:01.874381  209818 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 18:24:01.874470  209818 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 18:24:01.953549  209818 cri.go:89] found id: "77450e891d0f89a28fe61fe538b628c44c0a3acdc00441daeaf6962e3dc60913"
	I1018 18:24:01.953575  209818 cri.go:89] found id: "6924b85ba570a30f851ea60cbaf6498eaf85975f8883a16972cc0b614db3ae1a"
	I1018 18:24:01.953579  209818 cri.go:89] found id: "62679bc7de3d96ea22f1ec1fe03d9713354a7daeddddb6d200b8b0232b3a9220"
	I1018 18:24:01.953583  209818 cri.go:89] found id: "79df2c9185f91fe68153976c279dd9aaa7775b92571473ea614f468db51721de"
	I1018 18:24:01.953587  209818 cri.go:89] found id: "54c91171549a4e5775393d4768d527b1d8e22fd30e268495e4d3b6100ec319a5"
	I1018 18:24:01.953591  209818 cri.go:89] found id: "b95b10640928c3fde63ab1dc3d1f20d7b8532a3c6bb09b5b79dd506a1cada9c2"
	I1018 18:24:01.953594  209818 cri.go:89] found id: "01370573ce2751c5a9bc9cf2c5b653ed64758dd3e91f3aec786b7f16d88bf722"
	I1018 18:24:01.953597  209818 cri.go:89] found id: "a40f2fadeda1857088554cfe73930b819e69cca05e8a65552a5d8d7bb7b5946d"
	I1018 18:24:01.953600  209818 cri.go:89] found id: "23e7b4f21a923153503f5d9f363c452579100dd2a260750e3b7a35d6ca8dcb22"
	I1018 18:24:01.953611  209818 cri.go:89] found id: "d724bfd66793bad839089c2ac7e4752e48c341e43c36b8084731177c7fea4183"
	I1018 18:24:01.953615  209818 cri.go:89] found id: "968df3fa8857ba03f182ffe49abd49d62aa437f6426d77631a03400f7324c070"
	I1018 18:24:01.953618  209818 cri.go:89] found id: ""
	I1018 18:24:01.953669  209818 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 18:24:01.965386  209818 retry.go:31] will retry after 501.485056ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T18:24:01Z" level=error msg="open /run/runc: no such file or directory"
	I1018 18:24:02.467164  209818 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 18:24:02.481533  209818 pause.go:52] kubelet running: false
	I1018 18:24:02.481640  209818 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 18:24:02.657270  209818 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 18:24:02.657366  209818 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 18:24:02.726561  209818 cri.go:89] found id: "77450e891d0f89a28fe61fe538b628c44c0a3acdc00441daeaf6962e3dc60913"
	I1018 18:24:02.726645  209818 cri.go:89] found id: "6924b85ba570a30f851ea60cbaf6498eaf85975f8883a16972cc0b614db3ae1a"
	I1018 18:24:02.726663  209818 cri.go:89] found id: "62679bc7de3d96ea22f1ec1fe03d9713354a7daeddddb6d200b8b0232b3a9220"
	I1018 18:24:02.726679  209818 cri.go:89] found id: "79df2c9185f91fe68153976c279dd9aaa7775b92571473ea614f468db51721de"
	I1018 18:24:02.726689  209818 cri.go:89] found id: "54c91171549a4e5775393d4768d527b1d8e22fd30e268495e4d3b6100ec319a5"
	I1018 18:24:02.726721  209818 cri.go:89] found id: "b95b10640928c3fde63ab1dc3d1f20d7b8532a3c6bb09b5b79dd506a1cada9c2"
	I1018 18:24:02.726737  209818 cri.go:89] found id: "01370573ce2751c5a9bc9cf2c5b653ed64758dd3e91f3aec786b7f16d88bf722"
	I1018 18:24:02.726751  209818 cri.go:89] found id: "a40f2fadeda1857088554cfe73930b819e69cca05e8a65552a5d8d7bb7b5946d"
	I1018 18:24:02.726755  209818 cri.go:89] found id: "23e7b4f21a923153503f5d9f363c452579100dd2a260750e3b7a35d6ca8dcb22"
	I1018 18:24:02.726762  209818 cri.go:89] found id: "d724bfd66793bad839089c2ac7e4752e48c341e43c36b8084731177c7fea4183"
	I1018 18:24:02.726767  209818 cri.go:89] found id: "968df3fa8857ba03f182ffe49abd49d62aa437f6426d77631a03400f7324c070"
	I1018 18:24:02.726772  209818 cri.go:89] found id: ""
	I1018 18:24:02.726836  209818 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 18:24:02.743844  209818 out.go:203] 
	W1018 18:24:02.746823  209818 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T18:24:02Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T18:24:02Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 18:24:02.746853  209818 out.go:285] * 
	* 
	W1018 18:24:02.752286  209818 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 18:24:02.755178  209818 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p default-k8s-diff-port-192562 --alsologtostderr -v=1 failed: exit status 80
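The stderr above shows the pause flow: stop the kubelet, enumerate containers in the kube-system, kubernetes-dashboard and istio-operator namespaces with crictl, then run "sudo runc list -f json", which keeps failing because /run/runc does not exist on this node. A hedged way to reproduce just the failing step while the profile is still up (the inner commands are copied from the log; invoking them through minikube ssh is an assumption about the environment):

	out/minikube-linux-arm64 -p default-k8s-diff-port-192562 ssh -- sudo crictl ps -a --quiet \
	  --label io.kubernetes.pod.namespace=kube-system
	out/minikube-linux-arm64 -p default-k8s-diff-port-192562 ssh -- sudo runc list -f json
	# The second command exits 1 with "open /run/runc: no such file or directory",
	# matching the GUEST_PAUSE error above.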
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-192562
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-192562:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c0a8933c552c9d4e5fb4ca01ca33c573463079ebfb6960b8ac96dc752d5faeaa",
	        "Created": "2025-10-18T18:21:07.306681967Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 204791,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T18:22:53.359425827Z",
	            "FinishedAt": "2025-10-18T18:22:52.553707144Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/c0a8933c552c9d4e5fb4ca01ca33c573463079ebfb6960b8ac96dc752d5faeaa/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c0a8933c552c9d4e5fb4ca01ca33c573463079ebfb6960b8ac96dc752d5faeaa/hostname",
	        "HostsPath": "/var/lib/docker/containers/c0a8933c552c9d4e5fb4ca01ca33c573463079ebfb6960b8ac96dc752d5faeaa/hosts",
	        "LogPath": "/var/lib/docker/containers/c0a8933c552c9d4e5fb4ca01ca33c573463079ebfb6960b8ac96dc752d5faeaa/c0a8933c552c9d4e5fb4ca01ca33c573463079ebfb6960b8ac96dc752d5faeaa-json.log",
	        "Name": "/default-k8s-diff-port-192562",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-192562:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-192562",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c0a8933c552c9d4e5fb4ca01ca33c573463079ebfb6960b8ac96dc752d5faeaa",
	                "LowerDir": "/var/lib/docker/overlay2/dee070e079682e34299d25230ff60b4454bdeead13a662fbf9dd6a74e43397c1-init/diff:/var/lib/docker/overlay2/584ab177b02ad2db5330471b7171ad39934c457d8615b9ee4939a04b59f78474/diff",
	                "MergedDir": "/var/lib/docker/overlay2/dee070e079682e34299d25230ff60b4454bdeead13a662fbf9dd6a74e43397c1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/dee070e079682e34299d25230ff60b4454bdeead13a662fbf9dd6a74e43397c1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/dee070e079682e34299d25230ff60b4454bdeead13a662fbf9dd6a74e43397c1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-192562",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-192562/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-192562",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-192562",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-192562",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b16cf4347e390a1cfb7e1d73af3f36a2fcccba03bd6a8fdcd4614395eeb04d65",
	            "SandboxKey": "/var/run/docker/netns/b16cf4347e39",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-192562": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4a:c4:b8:3c:30:48",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "38c20734cd0994956410457c1029d2a36f99d2c176924ac552fc426e5efdac60",
	                    "EndpointID": "a20665a4eb82c63e646d1ab4236e8ac95459e35bf341507067d451e264f41f71",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-192562",
	                        "c0a8933c552c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
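The Ports map in the inspect output above is what minikube reads to find the node's forwarded SSH endpoint; the same Go template it ran earlier in this report can be used by hand (the profile name and template are taken from the log, and the exact port is specific to this run):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' default-k8s-diff-port-192562
	# Prints 33063 for this run, matching the 22/tcp HostPort shown above.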
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-192562 -n default-k8s-diff-port-192562
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-192562 -n default-k8s-diff-port-192562: exit status 2 (385.277964ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-192562 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-192562 logs -n 25: (1.366975468s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p cert-options-327418 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-327418          │ jenkins │ v1.37.0 │ 18 Oct 25 18:18 UTC │ 18 Oct 25 18:18 UTC │
	│ delete  │ -p cert-options-327418                                                                                                                                                                                                                        │ cert-options-327418          │ jenkins │ v1.37.0 │ 18 Oct 25 18:18 UTC │ 18 Oct 25 18:18 UTC │
	│ start   │ -p old-k8s-version-918475 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-918475       │ jenkins │ v1.37.0 │ 18 Oct 25 18:18 UTC │ 18 Oct 25 18:19 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-918475 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-918475       │ jenkins │ v1.37.0 │ 18 Oct 25 18:19 UTC │                     │
	│ stop    │ -p old-k8s-version-918475 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-918475       │ jenkins │ v1.37.0 │ 18 Oct 25 18:19 UTC │ 18 Oct 25 18:19 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-918475 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-918475       │ jenkins │ v1.37.0 │ 18 Oct 25 18:19 UTC │ 18 Oct 25 18:19 UTC │
	│ start   │ -p old-k8s-version-918475 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-918475       │ jenkins │ v1.37.0 │ 18 Oct 25 18:19 UTC │ 18 Oct 25 18:20 UTC │
	│ image   │ old-k8s-version-918475 image list --format=json                                                                                                                                                                                               │ old-k8s-version-918475       │ jenkins │ v1.37.0 │ 18 Oct 25 18:20 UTC │ 18 Oct 25 18:20 UTC │
	│ pause   │ -p old-k8s-version-918475 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-918475       │ jenkins │ v1.37.0 │ 18 Oct 25 18:20 UTC │                     │
	│ delete  │ -p old-k8s-version-918475                                                                                                                                                                                                                     │ old-k8s-version-918475       │ jenkins │ v1.37.0 │ 18 Oct 25 18:20 UTC │ 18 Oct 25 18:21 UTC │
	│ start   │ -p cert-expiration-463770 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-463770       │ jenkins │ v1.37.0 │ 18 Oct 25 18:21 UTC │ 18 Oct 25 18:21 UTC │
	│ delete  │ -p old-k8s-version-918475                                                                                                                                                                                                                     │ old-k8s-version-918475       │ jenkins │ v1.37.0 │ 18 Oct 25 18:21 UTC │ 18 Oct 25 18:21 UTC │
	│ start   │ -p default-k8s-diff-port-192562 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-192562 │ jenkins │ v1.37.0 │ 18 Oct 25 18:21 UTC │ 18 Oct 25 18:22 UTC │
	│ delete  │ -p cert-expiration-463770                                                                                                                                                                                                                     │ cert-expiration-463770       │ jenkins │ v1.37.0 │ 18 Oct 25 18:21 UTC │ 18 Oct 25 18:21 UTC │
	│ start   │ -p embed-certs-213943 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-213943           │ jenkins │ v1.37.0 │ 18 Oct 25 18:21 UTC │ 18 Oct 25 18:22 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-192562 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-192562 │ jenkins │ v1.37.0 │ 18 Oct 25 18:22 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-192562 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-192562 │ jenkins │ v1.37.0 │ 18 Oct 25 18:22 UTC │ 18 Oct 25 18:22 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-192562 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-192562 │ jenkins │ v1.37.0 │ 18 Oct 25 18:22 UTC │ 18 Oct 25 18:22 UTC │
	│ start   │ -p default-k8s-diff-port-192562 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-192562 │ jenkins │ v1.37.0 │ 18 Oct 25 18:22 UTC │ 18 Oct 25 18:23 UTC │
	│ addons  │ enable metrics-server -p embed-certs-213943 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-213943           │ jenkins │ v1.37.0 │ 18 Oct 25 18:23 UTC │                     │
	│ stop    │ -p embed-certs-213943 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-213943           │ jenkins │ v1.37.0 │ 18 Oct 25 18:23 UTC │ 18 Oct 25 18:23 UTC │
	│ addons  │ enable dashboard -p embed-certs-213943 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-213943           │ jenkins │ v1.37.0 │ 18 Oct 25 18:23 UTC │ 18 Oct 25 18:23 UTC │
	│ start   │ -p embed-certs-213943 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-213943           │ jenkins │ v1.37.0 │ 18 Oct 25 18:23 UTC │                     │
	│ image   │ default-k8s-diff-port-192562 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-192562 │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │ 18 Oct 25 18:24 UTC │
	│ pause   │ -p default-k8s-diff-port-192562 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-192562 │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 18:23:24
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 18:23:24.617795  207600 out.go:360] Setting OutFile to fd 1 ...
	I1018 18:23:24.617928  207600 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 18:23:24.617938  207600 out.go:374] Setting ErrFile to fd 2...
	I1018 18:23:24.617943  207600 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 18:23:24.618207  207600 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-2509/.minikube/bin
	I1018 18:23:24.618577  207600 out.go:368] Setting JSON to false
	I1018 18:23:24.619524  207600 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7554,"bootTime":1760804251,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1018 18:23:24.619605  207600 start.go:141] virtualization:  
	I1018 18:23:24.622697  207600 out.go:179] * [embed-certs-213943] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 18:23:24.626595  207600 out.go:179]   - MINIKUBE_LOCATION=21409
	I1018 18:23:24.626663  207600 notify.go:220] Checking for updates...
	I1018 18:23:24.632599  207600 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 18:23:24.635767  207600 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-2509/kubeconfig
	I1018 18:23:24.638863  207600 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-2509/.minikube
	I1018 18:23:24.641819  207600 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 18:23:24.644727  207600 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 18:23:24.648300  207600 config.go:182] Loaded profile config "embed-certs-213943": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 18:23:24.649141  207600 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 18:23:24.683136  207600 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 18:23:24.683265  207600 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 18:23:24.741288  207600 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 18:23:24.731264595 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 18:23:24.741397  207600 docker.go:318] overlay module found
	I1018 18:23:24.744645  207600 out.go:179] * Using the docker driver based on existing profile
	I1018 18:23:24.747502  207600 start.go:305] selected driver: docker
	I1018 18:23:24.747526  207600 start.go:925] validating driver "docker" against &{Name:embed-certs-213943 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-213943 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 18:23:24.747642  207600 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 18:23:24.748359  207600 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 18:23:24.809329  207600 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 18:23:24.799788108 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 18:23:24.809666  207600 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 18:23:24.809702  207600 cni.go:84] Creating CNI manager for ""
	I1018 18:23:24.809757  207600 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 18:23:24.809811  207600 start.go:349] cluster config:
	{Name:embed-certs-213943 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-213943 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 18:23:24.813145  207600 out.go:179] * Starting "embed-certs-213943" primary control-plane node in "embed-certs-213943" cluster
	I1018 18:23:24.816108  207600 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 18:23:24.819046  207600 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 18:23:24.821780  207600 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 18:23:24.821835  207600 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1018 18:23:24.821847  207600 cache.go:58] Caching tarball of preloaded images
	I1018 18:23:24.821885  207600 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 18:23:24.821988  207600 preload.go:233] Found /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 18:23:24.822000  207600 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 18:23:24.822124  207600 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/embed-certs-213943/config.json ...
	I1018 18:23:24.849234  207600 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 18:23:24.849255  207600 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 18:23:24.849275  207600 cache.go:232] Successfully downloaded all kic artifacts
	I1018 18:23:24.849297  207600 start.go:360] acquireMachinesLock for embed-certs-213943: {Name:mk6236f8122624f68835f4877bda621eb0a7ae61 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 18:23:24.849363  207600 start.go:364] duration metric: took 42.347µs to acquireMachinesLock for "embed-certs-213943"
	I1018 18:23:24.849384  207600 start.go:96] Skipping create...Using existing machine configuration
	I1018 18:23:24.849393  207600 fix.go:54] fixHost starting: 
	I1018 18:23:24.849653  207600 cli_runner.go:164] Run: docker container inspect embed-certs-213943 --format={{.State.Status}}
	I1018 18:23:24.866587  207600 fix.go:112] recreateIfNeeded on embed-certs-213943: state=Stopped err=<nil>
	W1018 18:23:24.866618  207600 fix.go:138] unexpected machine state, will restart: <nil>
	W1018 18:23:24.911310  204660 pod_ready.go:104] pod "coredns-66bc5c9577-psj29" is not "Ready", error: <nil>
	W1018 18:23:26.911823  204660 pod_ready.go:104] pod "coredns-66bc5c9577-psj29" is not "Ready", error: <nil>
	I1018 18:23:24.869846  207600 out.go:252] * Restarting existing docker container for "embed-certs-213943" ...
	I1018 18:23:24.869934  207600 cli_runner.go:164] Run: docker start embed-certs-213943
	I1018 18:23:25.151279  207600 cli_runner.go:164] Run: docker container inspect embed-certs-213943 --format={{.State.Status}}
	I1018 18:23:25.173255  207600 kic.go:430] container "embed-certs-213943" state is running.
	I1018 18:23:25.173639  207600 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-213943
	I1018 18:23:25.196763  207600 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/embed-certs-213943/config.json ...
	I1018 18:23:25.197018  207600 machine.go:93] provisionDockerMachine start ...
	I1018 18:23:25.197092  207600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213943
	I1018 18:23:25.219843  207600 main.go:141] libmachine: Using SSH client type: native
	I1018 18:23:25.220174  207600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1018 18:23:25.220184  207600 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 18:23:25.220894  207600 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33716->127.0.0.1:33068: read: connection reset by peer
	I1018 18:23:28.368607  207600 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-213943
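	(Editor's note, for context on the step above: the provisioning commands are run over SSH against the container's published port 33068 using the machine's generated key. The following is a minimal Go sketch of that pattern, not minikube's actual libmachine code; the key path and port are taken from this log, everything else is an assumption.)

```go
// Hypothetical sketch: run a command on the node over the container's
// published SSH port, roughly what the libmachine lines above are doing.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path and port taken from the log output above.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/21409-2509/.minikube/machines/embed-certs-213943/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}

	client, err := ssh.Dial("tcp", "127.0.0.1:33068", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test node; host key not pinned
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	out, err := session.CombinedOutput("hostname")
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out)) // expect "embed-certs-213943", as in the log
}
```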
	
	I1018 18:23:28.368643  207600 ubuntu.go:182] provisioning hostname "embed-certs-213943"
	I1018 18:23:28.368706  207600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213943
	I1018 18:23:28.388844  207600 main.go:141] libmachine: Using SSH client type: native
	I1018 18:23:28.389197  207600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1018 18:23:28.389218  207600 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-213943 && echo "embed-certs-213943" | sudo tee /etc/hostname
	I1018 18:23:28.550268  207600 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-213943
	
	I1018 18:23:28.550353  207600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213943
	I1018 18:23:28.571684  207600 main.go:141] libmachine: Using SSH client type: native
	I1018 18:23:28.572041  207600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1018 18:23:28.572066  207600 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-213943' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-213943/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-213943' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 18:23:28.721224  207600 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 18:23:28.721247  207600 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-2509/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-2509/.minikube}
	I1018 18:23:28.721280  207600 ubuntu.go:190] setting up certificates
	I1018 18:23:28.721290  207600 provision.go:84] configureAuth start
	I1018 18:23:28.721349  207600 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-213943
	I1018 18:23:28.738348  207600 provision.go:143] copyHostCerts
	I1018 18:23:28.738424  207600 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem, removing ...
	I1018 18:23:28.738542  207600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem
	I1018 18:23:28.738637  207600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem (1078 bytes)
	I1018 18:23:28.738765  207600 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem, removing ...
	I1018 18:23:28.738775  207600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem
	I1018 18:23:28.738804  207600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem (1123 bytes)
	I1018 18:23:28.738864  207600 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem, removing ...
	I1018 18:23:28.738872  207600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem
	I1018 18:23:28.738896  207600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem (1675 bytes)
	I1018 18:23:28.738959  207600 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem org=jenkins.embed-certs-213943 san=[127.0.0.1 192.168.85.2 embed-certs-213943 localhost minikube]
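	(Editor's note: the line above reports generating a server certificate signed by the minikube CA with the listed SANs. A hedged Go sketch of that kind of issuance is below; it is illustrative only, not minikube's source. The file paths and SAN list come from the log; the key size, validity period, and the assumption that the CA key is PKCS#1 RSA are mine.)

```go
// Illustrative only: issue a CA-signed server certificate with the SANs
// shown in the log (127.0.0.1, 192.168.85.2, embed-certs-213943, localhost, minikube).
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

// mustPEM reads a file and returns its first PEM block.
func mustPEM(path string) *pem.Block {
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM data in " + path)
	}
	return block
}

func main() {
	caCert, err := x509.ParseCertificate(mustPEM("/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem").Bytes)
	if err != nil {
		panic(err)
	}
	// Assumption: PKCS#1 RSA CA key; use ParsePKCS8PrivateKey for PKCS#8/EC keys.
	caKey, err := x509.ParsePKCS1PrivateKey(mustPEM("/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem").Bytes)
	if err != nil {
		panic(err)
	}

	serverKey, err := rsa.GenerateKey(rand.Reader, 2048) // key size is an assumption
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-213943"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // validity is an assumption
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"embed-certs-213943", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```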
	I1018 18:23:29.032199  207600 provision.go:177] copyRemoteCerts
	I1018 18:23:29.032299  207600 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 18:23:29.032346  207600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213943
	I1018 18:23:29.049922  207600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/embed-certs-213943/id_rsa Username:docker}
	I1018 18:23:29.153048  207600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 18:23:29.170942  207600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 18:23:29.188389  207600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1018 18:23:29.206135  207600 provision.go:87] duration metric: took 484.832123ms to configureAuth
	I1018 18:23:29.206159  207600 ubuntu.go:206] setting minikube options for container-runtime
	I1018 18:23:29.206349  207600 config.go:182] Loaded profile config "embed-certs-213943": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 18:23:29.206456  207600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213943
	I1018 18:23:29.226207  207600 main.go:141] libmachine: Using SSH client type: native
	I1018 18:23:29.226509  207600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1018 18:23:29.226523  207600 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 18:23:29.546737  207600 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 18:23:29.546758  207600 machine.go:96] duration metric: took 4.349730235s to provisionDockerMachine
	I1018 18:23:29.546769  207600 start.go:293] postStartSetup for "embed-certs-213943" (driver="docker")
	I1018 18:23:29.546780  207600 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 18:23:29.546857  207600 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 18:23:29.546906  207600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213943
	I1018 18:23:29.575914  207600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/embed-certs-213943/id_rsa Username:docker}
	I1018 18:23:29.680784  207600 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 18:23:29.683971  207600 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 18:23:29.684002  207600 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 18:23:29.684013  207600 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/addons for local assets ...
	I1018 18:23:29.684066  207600 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/files for local assets ...
	I1018 18:23:29.684148  207600 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> 43202.pem in /etc/ssl/certs
	I1018 18:23:29.684261  207600 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 18:23:29.691612  207600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /etc/ssl/certs/43202.pem (1708 bytes)
	I1018 18:23:29.708867  207600 start.go:296] duration metric: took 162.071183ms for postStartSetup
	I1018 18:23:29.708973  207600 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 18:23:29.709016  207600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213943
	I1018 18:23:29.725660  207600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/embed-certs-213943/id_rsa Username:docker}
	I1018 18:23:29.826114  207600 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 18:23:29.830971  207600 fix.go:56] duration metric: took 4.981572653s for fixHost
	I1018 18:23:29.830996  207600 start.go:83] releasing machines lock for "embed-certs-213943", held for 4.981621885s
	I1018 18:23:29.831076  207600 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-213943
	I1018 18:23:29.848133  207600 ssh_runner.go:195] Run: cat /version.json
	I1018 18:23:29.848188  207600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213943
	I1018 18:23:29.848457  207600 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 18:23:29.848508  207600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213943
	I1018 18:23:29.866263  207600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/embed-certs-213943/id_rsa Username:docker}
	I1018 18:23:29.869026  207600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/embed-certs-213943/id_rsa Username:docker}
	I1018 18:23:29.969727  207600 ssh_runner.go:195] Run: systemctl --version
	I1018 18:23:30.067036  207600 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 18:23:30.107575  207600 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 18:23:30.112571  207600 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 18:23:30.112651  207600 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 18:23:30.121843  207600 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 18:23:30.121908  207600 start.go:495] detecting cgroup driver to use...
	I1018 18:23:30.121947  207600 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 18:23:30.122016  207600 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 18:23:30.138155  207600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 18:23:30.152077  207600 docker.go:218] disabling cri-docker service (if available) ...
	I1018 18:23:30.152166  207600 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 18:23:30.168898  207600 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 18:23:30.181883  207600 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 18:23:30.293824  207600 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 18:23:30.416205  207600 docker.go:234] disabling docker service ...
	I1018 18:23:30.416269  207600 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 18:23:30.431366  207600 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 18:23:30.444641  207600 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 18:23:30.561989  207600 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 18:23:30.700270  207600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 18:23:30.713230  207600 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 18:23:30.730201  207600 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 18:23:30.730307  207600 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:23:30.741647  207600 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 18:23:30.741757  207600 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:23:30.750966  207600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:23:30.760165  207600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:23:30.769385  207600 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 18:23:30.778084  207600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:23:30.787525  207600 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:23:30.796415  207600 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:23:30.805661  207600 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 18:23:30.813058  207600 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 18:23:30.820430  207600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 18:23:30.946384  207600 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 18:23:31.099326  207600 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 18:23:31.099417  207600 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 18:23:31.103712  207600 start.go:563] Will wait 60s for crictl version
	I1018 18:23:31.103821  207600 ssh_runner.go:195] Run: which crictl
	I1018 18:23:31.107578  207600 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 18:23:31.136355  207600 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 18:23:31.136508  207600 ssh_runner.go:195] Run: crio --version
	I1018 18:23:31.166012  207600 ssh_runner.go:195] Run: crio --version
	I1018 18:23:31.198710  207600 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 18:23:31.201635  207600 cli_runner.go:164] Run: docker network inspect embed-certs-213943 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 18:23:31.218863  207600 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1018 18:23:31.223869  207600 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 18:23:31.233707  207600 kubeadm.go:883] updating cluster {Name:embed-certs-213943 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-213943 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 18:23:31.233833  207600 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 18:23:31.233900  207600 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 18:23:31.271684  207600 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 18:23:31.271705  207600 crio.go:433] Images already preloaded, skipping extraction
	I1018 18:23:31.271763  207600 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 18:23:31.300978  207600 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 18:23:31.301002  207600 cache_images.go:85] Images are preloaded, skipping loading
	I1018 18:23:31.301022  207600 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1018 18:23:31.301132  207600 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-213943 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-213943 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 18:23:31.301217  207600 ssh_runner.go:195] Run: crio config
	I1018 18:23:31.381373  207600 cni.go:84] Creating CNI manager for ""
	I1018 18:23:31.381395  207600 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 18:23:31.381415  207600 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 18:23:31.381438  207600 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-213943 NodeName:embed-certs-213943 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 18:23:31.381564  207600 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-213943"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 18:23:31.381635  207600 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 18:23:31.389389  207600 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 18:23:31.389458  207600 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 18:23:31.397101  207600 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1018 18:23:31.412825  207600 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 18:23:31.429853  207600 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1018 18:23:31.443522  207600 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1018 18:23:31.447278  207600 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 18:23:31.457800  207600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 18:23:31.589117  207600 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 18:23:31.607623  207600 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/embed-certs-213943 for IP: 192.168.85.2
	I1018 18:23:31.607700  207600 certs.go:195] generating shared ca certs ...
	I1018 18:23:31.607730  207600 certs.go:227] acquiring lock for ca certs: {Name:mk544ed642fa2832e9f6dd22fa45f3270b7c1ad4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:23:31.607920  207600 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key
	I1018 18:23:31.607992  207600 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key
	I1018 18:23:31.608014  207600 certs.go:257] generating profile certs ...
	I1018 18:23:31.608130  207600 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/embed-certs-213943/client.key
	I1018 18:23:31.608217  207600 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/embed-certs-213943/apiserver.key.b72dfec4
	I1018 18:23:31.608289  207600 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/embed-certs-213943/proxy-client.key
	I1018 18:23:31.608434  207600 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem (1338 bytes)
	W1018 18:23:31.608490  207600 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320_empty.pem, impossibly tiny 0 bytes
	I1018 18:23:31.608518  207600 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 18:23:31.608574  207600 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem (1078 bytes)
	I1018 18:23:31.608623  207600 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem (1123 bytes)
	I1018 18:23:31.608687  207600 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem (1675 bytes)
	I1018 18:23:31.608754  207600 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem (1708 bytes)
	I1018 18:23:31.609443  207600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 18:23:31.634278  207600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 18:23:31.656855  207600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 18:23:31.679745  207600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 18:23:31.708395  207600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/embed-certs-213943/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1018 18:23:31.737622  207600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/embed-certs-213943/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 18:23:31.762577  207600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/embed-certs-213943/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 18:23:31.786531  207600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/embed-certs-213943/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 18:23:31.811959  207600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /usr/share/ca-certificates/43202.pem (1708 bytes)
	I1018 18:23:31.834308  207600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 18:23:31.853365  207600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem --> /usr/share/ca-certificates/4320.pem (1338 bytes)
	I1018 18:23:31.872882  207600 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 18:23:31.885942  207600 ssh_runner.go:195] Run: openssl version
	I1018 18:23:31.892138  207600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43202.pem && ln -fs /usr/share/ca-certificates/43202.pem /etc/ssl/certs/43202.pem"
	I1018 18:23:31.900280  207600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43202.pem
	I1018 18:23:31.903980  207600 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 17:19 /usr/share/ca-certificates/43202.pem
	I1018 18:23:31.904066  207600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43202.pem
	I1018 18:23:31.946917  207600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43202.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 18:23:31.955406  207600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 18:23:31.963808  207600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 18:23:31.967482  207600 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I1018 18:23:31.967573  207600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 18:23:32.008880  207600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 18:23:32.017730  207600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4320.pem && ln -fs /usr/share/ca-certificates/4320.pem /etc/ssl/certs/4320.pem"
	I1018 18:23:32.026412  207600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4320.pem
	I1018 18:23:32.030712  207600 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 17:19 /usr/share/ca-certificates/4320.pem
	I1018 18:23:32.030823  207600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4320.pem
	I1018 18:23:32.072118  207600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4320.pem /etc/ssl/certs/51391683.0"
	I1018 18:23:32.080782  207600 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 18:23:32.084655  207600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 18:23:32.126495  207600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 18:23:32.167951  207600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 18:23:32.214199  207600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 18:23:32.260663  207600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 18:23:32.319590  207600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
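	(Editor's note: the `openssl x509 -checkend 86400` runs above verify that each control-plane certificate is still valid for at least 24 hours before reusing it. A minimal Go sketch of the equivalent check follows; the certificate path is taken from the log and is just an example input, the rest is an assumption about how one might reproduce the check, not minikube's code.)

```go
// Sketch of the same check that `openssl x509 -checkend 86400` performs:
// parse a PEM certificate and report whether it expires within 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate expires within 24h:", cert.NotAfter)
	} else {
		fmt.Println("certificate valid past 24h; no regeneration needed")
	}
}
```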
	I1018 18:23:32.387077  207600 kubeadm.go:400] StartCluster: {Name:embed-certs-213943 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-213943 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 18:23:32.387232  207600 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 18:23:32.387329  207600 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 18:23:32.475629  207600 cri.go:89] found id: "97b7723e6cc93259a63a7dc305c6dd7a4974876e6dc283507e6d8ce5af737bcb"
	I1018 18:23:32.475696  207600 cri.go:89] found id: "9ae5471fee776db561d720631098bdc12432bd23b92d88eb2d07deb57fed51ac"
	I1018 18:23:32.475716  207600 cri.go:89] found id: "579b2e90159d3f472f72b4d74cead642311dbb50b6aa56372bed6e44fa5f0026"
	I1018 18:23:32.475735  207600 cri.go:89] found id: "320b2b6a0f723790bef132bc7d46d0c55becfa751e8cd836c15cde5c23b0446d"
	I1018 18:23:32.475767  207600 cri.go:89] found id: ""
	I1018 18:23:32.475848  207600 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 18:23:32.495337  207600 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T18:23:32Z" level=error msg="open /run/runc: no such file or directory"
	I1018 18:23:32.495510  207600 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 18:23:32.509107  207600 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 18:23:32.509176  207600 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 18:23:32.509255  207600 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 18:23:32.521736  207600 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 18:23:32.522438  207600 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-213943" does not appear in /home/jenkins/minikube-integration/21409-2509/kubeconfig
	I1018 18:23:32.522771  207600 kubeconfig.go:62] /home/jenkins/minikube-integration/21409-2509/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-213943" cluster setting kubeconfig missing "embed-certs-213943" context setting]
	I1018 18:23:32.523391  207600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/kubeconfig: {Name:mk229d7d0b6575771899e0ce8a346bbddf5cf86b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:23:32.525296  207600 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 18:23:32.536746  207600 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1018 18:23:32.536830  207600 kubeadm.go:601] duration metric: took 27.634758ms to restartPrimaryControlPlane
	I1018 18:23:32.536855  207600 kubeadm.go:402] duration metric: took 149.786808ms to StartCluster
	I1018 18:23:32.536894  207600 settings.go:142] acquiring lock: {Name:mk3a3fd093bc95e20cc1842611fedcbe4a79e692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:23:32.536999  207600 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-2509/kubeconfig
	I1018 18:23:32.538501  207600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/kubeconfig: {Name:mk229d7d0b6575771899e0ce8a346bbddf5cf86b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:23:32.538953  207600 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 18:23:32.539340  207600 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 18:23:32.539421  207600 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-213943"
	I1018 18:23:32.539435  207600 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-213943"
	W1018 18:23:32.539441  207600 addons.go:247] addon storage-provisioner should already be in state true
	I1018 18:23:32.539464  207600 host.go:66] Checking if "embed-certs-213943" exists ...
	I1018 18:23:32.539931  207600 cli_runner.go:164] Run: docker container inspect embed-certs-213943 --format={{.State.Status}}
	I1018 18:23:32.540247  207600 config.go:182] Loaded profile config "embed-certs-213943": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 18:23:32.540365  207600 addons.go:69] Setting default-storageclass=true in profile "embed-certs-213943"
	I1018 18:23:32.540436  207600 addons.go:69] Setting dashboard=true in profile "embed-certs-213943"
	I1018 18:23:32.540447  207600 addons.go:238] Setting addon dashboard=true in "embed-certs-213943"
	W1018 18:23:32.540453  207600 addons.go:247] addon dashboard should already be in state true
	I1018 18:23:32.540475  207600 host.go:66] Checking if "embed-certs-213943" exists ...
	I1018 18:23:32.540906  207600 cli_runner.go:164] Run: docker container inspect embed-certs-213943 --format={{.State.Status}}
	I1018 18:23:32.541106  207600 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-213943"
	I1018 18:23:32.541403  207600 cli_runner.go:164] Run: docker container inspect embed-certs-213943 --format={{.State.Status}}
	I1018 18:23:32.549110  207600 out.go:179] * Verifying Kubernetes components...
	I1018 18:23:32.557928  207600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 18:23:32.581043  207600 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 18:23:32.584507  207600 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 18:23:32.584533  207600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 18:23:32.584598  207600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213943
	I1018 18:23:32.606047  207600 addons.go:238] Setting addon default-storageclass=true in "embed-certs-213943"
	W1018 18:23:32.606081  207600 addons.go:247] addon default-storageclass should already be in state true
	I1018 18:23:32.606107  207600 host.go:66] Checking if "embed-certs-213943" exists ...
	I1018 18:23:32.606587  207600 cli_runner.go:164] Run: docker container inspect embed-certs-213943 --format={{.State.Status}}
	I1018 18:23:32.645177  207600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/embed-certs-213943/id_rsa Username:docker}
	I1018 18:23:32.647325  207600 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1018 18:23:32.651036  207600 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 18:23:32.651054  207600 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 18:23:32.651116  207600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213943
	I1018 18:23:32.655644  207600 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1018 18:23:29.413852  204660 pod_ready.go:104] pod "coredns-66bc5c9577-psj29" is not "Ready", error: <nil>
	W1018 18:23:31.912467  204660 pod_ready.go:104] pod "coredns-66bc5c9577-psj29" is not "Ready", error: <nil>
	I1018 18:23:32.662964  207600 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1018 18:23:32.662994  207600 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1018 18:23:32.663070  207600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213943
	I1018 18:23:32.679139  207600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/embed-certs-213943/id_rsa Username:docker}
	I1018 18:23:32.702430  207600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/embed-certs-213943/id_rsa Username:docker}
	I1018 18:23:32.851884  207600 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 18:23:32.883758  207600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 18:23:32.886291  207600 node_ready.go:35] waiting up to 6m0s for node "embed-certs-213943" to be "Ready" ...
	I1018 18:23:32.930301  207600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 18:23:33.076581  207600 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1018 18:23:33.076608  207600 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1018 18:23:33.170537  207600 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1018 18:23:33.170609  207600 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1018 18:23:33.197964  207600 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1018 18:23:33.198041  207600 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1018 18:23:33.222478  207600 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1018 18:23:33.222500  207600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1018 18:23:33.242355  207600 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1018 18:23:33.242380  207600 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1018 18:23:33.270728  207600 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1018 18:23:33.270762  207600 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1018 18:23:33.293397  207600 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1018 18:23:33.293423  207600 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1018 18:23:33.315173  207600 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1018 18:23:33.315193  207600 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1018 18:23:33.341833  207600 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 18:23:33.341857  207600 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1018 18:23:33.383539  207600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1018 18:23:34.411662  204660 pod_ready.go:104] pod "coredns-66bc5c9577-psj29" is not "Ready", error: <nil>
	W1018 18:23:36.412895  204660 pod_ready.go:104] pod "coredns-66bc5c9577-psj29" is not "Ready", error: <nil>
	I1018 18:23:37.899940  207600 node_ready.go:49] node "embed-certs-213943" is "Ready"
	I1018 18:23:37.900021  207600 node_ready.go:38] duration metric: took 5.013667626s for node "embed-certs-213943" to be "Ready" ...
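	(Editor's note: the node_ready wait above polls until the node reports a Ready condition. Below is a hedged client-go sketch of one such readiness check; the kubeconfig path and node name are taken from the log, and the use of client-go here is an illustration rather than a claim about minikube's implementation.)

```go
// Sketch: fetch the node via client-go and inspect its Ready condition,
// which is effectively what the node_ready wait above keeps checking.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21409-2509/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "embed-certs-213943", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			fmt.Printf("node %s Ready=%s\n", node.Name, c.Status)
		}
	}
}
```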
	I1018 18:23:37.900056  207600 api_server.go:52] waiting for apiserver process to appear ...
	I1018 18:23:37.900160  207600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 18:23:39.795977  207600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.912139699s)
	I1018 18:23:39.796052  207600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.865718279s)
	I1018 18:23:39.854969  207600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.471386295s)
	I1018 18:23:39.855195  207600 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.955005255s)
	I1018 18:23:39.855234  207600 api_server.go:72] duration metric: took 7.316213602s to wait for apiserver process to appear ...
	I1018 18:23:39.855246  207600 api_server.go:88] waiting for apiserver healthz status ...
	I1018 18:23:39.855264  207600 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 18:23:39.858431  207600 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-213943 addons enable metrics-server
	
	I1018 18:23:39.861259  207600 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	W1018 18:23:38.414207  204660 pod_ready.go:104] pod "coredns-66bc5c9577-psj29" is not "Ready", error: <nil>
	W1018 18:23:40.911359  204660 pod_ready.go:104] pod "coredns-66bc5c9577-psj29" is not "Ready", error: <nil>
	W1018 18:23:42.916002  204660 pod_ready.go:104] pod "coredns-66bc5c9577-psj29" is not "Ready", error: <nil>
	I1018 18:23:39.864807  207600 addons.go:514] duration metric: took 7.325459308s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1018 18:23:39.865699  207600 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 18:23:39.865725  207600 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 18:23:40.355972  207600 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 18:23:40.364479  207600 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1018 18:23:40.365602  207600 api_server.go:141] control plane version: v1.34.1
	I1018 18:23:40.365634  207600 api_server.go:131] duration metric: took 510.380063ms to wait for apiserver health ...
	I1018 18:23:40.365643  207600 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 18:23:40.372362  207600 system_pods.go:59] 8 kube-system pods found
	I1018 18:23:40.372404  207600 system_pods.go:61] "coredns-66bc5c9577-grf2z" [0a6125b1-a0eb-4600-9b53-35017d6ee21b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 18:23:40.372423  207600 system_pods.go:61] "etcd-embed-certs-213943" [8b55657c-393f-48c1-9a5d-6ab96021decb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 18:23:40.372430  207600 system_pods.go:61] "kindnet-44fc8" [b35c637a-9afc-46ee-93dd-89db133869e9] Running
	I1018 18:23:40.372438  207600 system_pods.go:61] "kube-apiserver-embed-certs-213943" [e615020d-5cc5-4e06-8605-21cfcd9b1750] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 18:23:40.372449  207600 system_pods.go:61] "kube-controller-manager-embed-certs-213943" [01383f1b-63a2-47e1-8946-f987e9bcee73] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 18:23:40.372454  207600 system_pods.go:61] "kube-proxy-gcf8n" [0f81c7f5-8e47-4826-bdb3-867782c394a7] Running
	I1018 18:23:40.372467  207600 system_pods.go:61] "kube-scheduler-embed-certs-213943" [216b830a-b447-408c-a3d1-7233624d11a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 18:23:40.372472  207600 system_pods.go:61] "storage-provisioner" [8b4837a6-135d-4719-b80f-0e37d07f3fe4] Running
	I1018 18:23:40.372479  207600 system_pods.go:74] duration metric: took 6.830036ms to wait for pod list to return data ...
	I1018 18:23:40.372498  207600 default_sa.go:34] waiting for default service account to be created ...
	I1018 18:23:40.375449  207600 default_sa.go:45] found service account: "default"
	I1018 18:23:40.375474  207600 default_sa.go:55] duration metric: took 2.963902ms for default service account to be created ...
	I1018 18:23:40.375483  207600 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 18:23:40.379770  207600 system_pods.go:86] 8 kube-system pods found
	I1018 18:23:40.379809  207600 system_pods.go:89] "coredns-66bc5c9577-grf2z" [0a6125b1-a0eb-4600-9b53-35017d6ee21b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 18:23:40.379819  207600 system_pods.go:89] "etcd-embed-certs-213943" [8b55657c-393f-48c1-9a5d-6ab96021decb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 18:23:40.379824  207600 system_pods.go:89] "kindnet-44fc8" [b35c637a-9afc-46ee-93dd-89db133869e9] Running
	I1018 18:23:40.379831  207600 system_pods.go:89] "kube-apiserver-embed-certs-213943" [e615020d-5cc5-4e06-8605-21cfcd9b1750] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 18:23:40.379838  207600 system_pods.go:89] "kube-controller-manager-embed-certs-213943" [01383f1b-63a2-47e1-8946-f987e9bcee73] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 18:23:40.379842  207600 system_pods.go:89] "kube-proxy-gcf8n" [0f81c7f5-8e47-4826-bdb3-867782c394a7] Running
	I1018 18:23:40.379849  207600 system_pods.go:89] "kube-scheduler-embed-certs-213943" [216b830a-b447-408c-a3d1-7233624d11a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 18:23:40.379853  207600 system_pods.go:89] "storage-provisioner" [8b4837a6-135d-4719-b80f-0e37d07f3fe4] Running
	I1018 18:23:40.379861  207600 system_pods.go:126] duration metric: took 4.372027ms to wait for k8s-apps to be running ...
	I1018 18:23:40.379872  207600 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 18:23:40.379933  207600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 18:23:40.420301  207600 system_svc.go:56] duration metric: took 40.420654ms WaitForService to wait for kubelet
	I1018 18:23:40.420332  207600 kubeadm.go:586] duration metric: took 7.881311265s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 18:23:40.420352  207600 node_conditions.go:102] verifying NodePressure condition ...
	I1018 18:23:40.423527  207600 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 18:23:40.423559  207600 node_conditions.go:123] node cpu capacity is 2
	I1018 18:23:40.423572  207600 node_conditions.go:105] duration metric: took 3.21495ms to run NodePressure ...
	I1018 18:23:40.423583  207600 start.go:241] waiting for startup goroutines ...
	I1018 18:23:40.423595  207600 start.go:246] waiting for cluster config update ...
	I1018 18:23:40.423606  207600 start.go:255] writing updated cluster config ...
	I1018 18:23:40.423902  207600 ssh_runner.go:195] Run: rm -f paused
	I1018 18:23:40.429429  207600 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 18:23:40.471097  207600 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-grf2z" in "kube-system" namespace to be "Ready" or be gone ...
	W1018 18:23:42.477314  207600 pod_ready.go:104] pod "coredns-66bc5c9577-grf2z" is not "Ready", error: <nil>
	W1018 18:23:44.478667  207600 pod_ready.go:104] pod "coredns-66bc5c9577-grf2z" is not "Ready", error: <nil>
	W1018 18:23:45.412491  204660 pod_ready.go:104] pod "coredns-66bc5c9577-psj29" is not "Ready", error: <nil>
	I1018 18:23:46.915332  204660 pod_ready.go:94] pod "coredns-66bc5c9577-psj29" is "Ready"
	I1018 18:23:46.915420  204660 pod_ready.go:86] duration metric: took 38.009682628s for pod "coredns-66bc5c9577-psj29" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:23:46.924316  204660 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-192562" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:23:46.937598  204660 pod_ready.go:94] pod "etcd-default-k8s-diff-port-192562" is "Ready"
	I1018 18:23:46.937677  204660 pod_ready.go:86] duration metric: took 13.267969ms for pod "etcd-default-k8s-diff-port-192562" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:23:46.946201  204660 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-192562" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:23:46.956804  204660 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-192562" is "Ready"
	I1018 18:23:46.956832  204660 pod_ready.go:86] duration metric: took 10.603697ms for pod "kube-apiserver-default-k8s-diff-port-192562" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:23:46.964692  204660 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-192562" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:23:47.110571  204660 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-192562" is "Ready"
	I1018 18:23:47.110603  204660 pod_ready.go:86] duration metric: took 145.882865ms for pod "kube-controller-manager-default-k8s-diff-port-192562" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:23:47.310686  204660 pod_ready.go:83] waiting for pod "kube-proxy-c7jft" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:23:47.708611  204660 pod_ready.go:94] pod "kube-proxy-c7jft" is "Ready"
	I1018 18:23:47.708641  204660 pod_ready.go:86] duration metric: took 397.929702ms for pod "kube-proxy-c7jft" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:23:47.909940  204660 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-192562" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:23:48.310403  204660 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-192562" is "Ready"
	I1018 18:23:48.310431  204660 pod_ready.go:86] duration metric: took 400.468229ms for pod "kube-scheduler-default-k8s-diff-port-192562" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:23:48.310443  204660 pod_ready.go:40] duration metric: took 39.409982667s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 18:23:48.395045  204660 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 18:23:48.398026  204660 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-192562" cluster and "default" namespace by default
	W1018 18:23:46.981624  207600 pod_ready.go:104] pod "coredns-66bc5c9577-grf2z" is not "Ready", error: <nil>
	W1018 18:23:49.478627  207600 pod_ready.go:104] pod "coredns-66bc5c9577-grf2z" is not "Ready", error: <nil>
	W1018 18:23:51.980872  207600 pod_ready.go:104] pod "coredns-66bc5c9577-grf2z" is not "Ready", error: <nil>
	W1018 18:23:54.477014  207600 pod_ready.go:104] pod "coredns-66bc5c9577-grf2z" is not "Ready", error: <nil>
	W1018 18:23:56.976981  207600 pod_ready.go:104] pod "coredns-66bc5c9577-grf2z" is not "Ready", error: <nil>
	W1018 18:23:58.978463  207600 pod_ready.go:104] pod "coredns-66bc5c9577-grf2z" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Oct 18 18:23:47 default-k8s-diff-port-192562 crio[649]: time="2025-10-18T18:23:47.853143247Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 18:23:47 default-k8s-diff-port-192562 crio[649]: time="2025-10-18T18:23:47.859557426Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 18:23:47 default-k8s-diff-port-192562 crio[649]: time="2025-10-18T18:23:47.859593603Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 18:23:47 default-k8s-diff-port-192562 crio[649]: time="2025-10-18T18:23:47.85961817Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 18:23:47 default-k8s-diff-port-192562 crio[649]: time="2025-10-18T18:23:47.862765574Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 18:23:47 default-k8s-diff-port-192562 crio[649]: time="2025-10-18T18:23:47.862801792Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 18:23:47 default-k8s-diff-port-192562 crio[649]: time="2025-10-18T18:23:47.86284953Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 18:23:47 default-k8s-diff-port-192562 crio[649]: time="2025-10-18T18:23:47.866198447Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 18:23:47 default-k8s-diff-port-192562 crio[649]: time="2025-10-18T18:23:47.866228617Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 18:23:47 default-k8s-diff-port-192562 crio[649]: time="2025-10-18T18:23:47.866255539Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 18:23:47 default-k8s-diff-port-192562 crio[649]: time="2025-10-18T18:23:47.869351685Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 18:23:47 default-k8s-diff-port-192562 crio[649]: time="2025-10-18T18:23:47.869387928Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 18:23:56 default-k8s-diff-port-192562 crio[649]: time="2025-10-18T18:23:56.997594714Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=cccc34d6-9719-4971-b9c1-b6d57d70c151 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 18:23:56 default-k8s-diff-port-192562 crio[649]: time="2025-10-18T18:23:56.999225439Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=fb28a74e-820f-4d66-acec-4ead633e1321 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 18:23:57 default-k8s-diff-port-192562 crio[649]: time="2025-10-18T18:23:57.001244033Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fz7jc/dashboard-metrics-scraper" id=29e78b38-c785-4b00-8c98-c6e4a50fe7e5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 18:23:57 default-k8s-diff-port-192562 crio[649]: time="2025-10-18T18:23:57.00155136Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 18:23:57 default-k8s-diff-port-192562 crio[649]: time="2025-10-18T18:23:57.011199566Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 18:23:57 default-k8s-diff-port-192562 crio[649]: time="2025-10-18T18:23:57.011807491Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 18:23:57 default-k8s-diff-port-192562 crio[649]: time="2025-10-18T18:23:57.034645479Z" level=info msg="Created container d724bfd66793bad839089c2ac7e4752e48c341e43c36b8084731177c7fea4183: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fz7jc/dashboard-metrics-scraper" id=29e78b38-c785-4b00-8c98-c6e4a50fe7e5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 18:23:57 default-k8s-diff-port-192562 crio[649]: time="2025-10-18T18:23:57.035992105Z" level=info msg="Starting container: d724bfd66793bad839089c2ac7e4752e48c341e43c36b8084731177c7fea4183" id=84257cc1-8269-4e32-ac4f-772ca2ae07ae name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 18:23:57 default-k8s-diff-port-192562 crio[649]: time="2025-10-18T18:23:57.038889431Z" level=info msg="Started container" PID=1716 containerID=d724bfd66793bad839089c2ac7e4752e48c341e43c36b8084731177c7fea4183 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fz7jc/dashboard-metrics-scraper id=84257cc1-8269-4e32-ac4f-772ca2ae07ae name=/runtime.v1.RuntimeService/StartContainer sandboxID=195e46e24830a56f539cdc92ba914d4cf5b224fbd9f59f2e07a0c3cac7c7318a
	Oct 18 18:23:57 default-k8s-diff-port-192562 conmon[1712]: conmon d724bfd66793bad83908 <ninfo>: container 1716 exited with status 1
	Oct 18 18:23:57 default-k8s-diff-port-192562 crio[649]: time="2025-10-18T18:23:57.300174607Z" level=info msg="Removing container: bb0db0af8dc67dd37db129ff5dd70a0a4e0681fa835778fb98bdcf2cc0ac52ee" id=6b3c1ab8-8395-4002-b175-f84c56841466 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 18:23:57 default-k8s-diff-port-192562 crio[649]: time="2025-10-18T18:23:57.314210085Z" level=info msg="Error loading conmon cgroup of container bb0db0af8dc67dd37db129ff5dd70a0a4e0681fa835778fb98bdcf2cc0ac52ee: cgroup deleted" id=6b3c1ab8-8395-4002-b175-f84c56841466 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 18:23:57 default-k8s-diff-port-192562 crio[649]: time="2025-10-18T18:23:57.318504401Z" level=info msg="Removed container bb0db0af8dc67dd37db129ff5dd70a0a4e0681fa835778fb98bdcf2cc0ac52ee: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fz7jc/dashboard-metrics-scraper" id=6b3c1ab8-8395-4002-b175-f84c56841466 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	d724bfd66793b       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           6 seconds ago        Exited              dashboard-metrics-scraper   3                   195e46e24830a       dashboard-metrics-scraper-6ffb444bf9-fz7jc             kubernetes-dashboard
	77450e891d0f8       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           24 seconds ago       Running             storage-provisioner         2                   4b35db890eb2e       storage-provisioner                                    kube-system
	968df3fa8857b       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   45 seconds ago       Running             kubernetes-dashboard        0                   b9b8de62d3a4d       kubernetes-dashboard-855c9754f9-mq728                  kubernetes-dashboard
	f8bda09d0b7f2       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           56 seconds ago       Running             busybox                     1                   a355167ded6a5       busybox                                                default
	6924b85ba570a       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           56 seconds ago       Running             coredns                     1                   bee241e681d96       coredns-66bc5c9577-psj29                               kube-system
	62679bc7de3d9       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           56 seconds ago       Running             kube-proxy                  1                   339402e29fff3       kube-proxy-c7jft                                       kube-system
	79df2c9185f91       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           56 seconds ago       Exited              storage-provisioner         1                   4b35db890eb2e       storage-provisioner                                    kube-system
	54c91171549a4       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           56 seconds ago       Running             kindnet-cni                 1                   e9846bd4be128       kindnet-6vrvc                                          kube-system
	b95b10640928c       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   45c63acb4cd17       kube-scheduler-default-k8s-diff-port-192562            kube-system
	01370573ce275       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   004965c6bdac6       etcd-default-k8s-diff-port-192562                      kube-system
	a40f2fadeda18       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   9ab229171bf67       kube-controller-manager-default-k8s-diff-port-192562   kube-system
	23e7b4f21a923       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   6b6abce6bd8d6       kube-apiserver-default-k8s-diff-port-192562            kube-system
	
	
	==> coredns [6924b85ba570a30f851ea60cbaf6498eaf85975f8883a16972cc0b614db3ae1a] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:41573 - 11168 "HINFO IN 7582150822066899225.6180216387346826941. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015492901s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-192562
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-192562
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=default-k8s-diff-port-192562
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T18_21_40_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 18:21:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-192562
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 18:23:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 18:23:37 +0000   Sat, 18 Oct 2025 18:21:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 18:23:37 +0000   Sat, 18 Oct 2025 18:21:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 18:23:37 +0000   Sat, 18 Oct 2025 18:21:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 18:23:37 +0000   Sat, 18 Oct 2025 18:22:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-192562
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                c4581513-26ed-464e-afab-6c98e6b6fd18
	  Boot ID:                    8a1d6305-2994-4aa4-a4cc-6b62966b9918
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-66bc5c9577-psj29                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m20s
	  kube-system                 etcd-default-k8s-diff-port-192562                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m25s
	  kube-system                 kindnet-6vrvc                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m20s
	  kube-system                 kube-apiserver-default-k8s-diff-port-192562             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-192562    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 kube-proxy-c7jft                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 kube-scheduler-default-k8s-diff-port-192562             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m19s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-fz7jc              0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-mq728                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m18s                  kube-proxy       
	  Normal   Starting                 55s                    kube-proxy       
	  Normal   Starting                 2m36s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m36s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m35s (x8 over 2m35s)  kubelet          Node default-k8s-diff-port-192562 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m35s (x8 over 2m35s)  kubelet          Node default-k8s-diff-port-192562 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m35s (x8 over 2m35s)  kubelet          Node default-k8s-diff-port-192562 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m25s                  kubelet          Node default-k8s-diff-port-192562 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m25s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m25s                  kubelet          Node default-k8s-diff-port-192562 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m25s                  kubelet          Node default-k8s-diff-port-192562 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m25s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m21s                  node-controller  Node default-k8s-diff-port-192562 event: Registered Node default-k8s-diff-port-192562 in Controller
	  Normal   NodeReady                99s                    kubelet          Node default-k8s-diff-port-192562 status is now: NodeReady
	  Normal   Starting                 64s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 64s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  63s (x8 over 64s)      kubelet          Node default-k8s-diff-port-192562 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    63s (x8 over 64s)      kubelet          Node default-k8s-diff-port-192562 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     63s (x8 over 64s)      kubelet          Node default-k8s-diff-port-192562 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           54s                    node-controller  Node default-k8s-diff-port-192562 event: Registered Node default-k8s-diff-port-192562 in Controller
	
	
	==> dmesg <==
	[Oct18 18:01] overlayfs: idmapped layers are currently not supported
	[Oct18 18:02] overlayfs: idmapped layers are currently not supported
	[Oct18 18:04] overlayfs: idmapped layers are currently not supported
	[ +24.403909] overlayfs: idmapped layers are currently not supported
	[  +6.162774] overlayfs: idmapped layers are currently not supported
	[Oct18 18:05] overlayfs: idmapped layers are currently not supported
	[ +25.128760] overlayfs: idmapped layers are currently not supported
	[Oct18 18:06] overlayfs: idmapped layers are currently not supported
	[Oct18 18:07] overlayfs: idmapped layers are currently not supported
	[Oct18 18:08] overlayfs: idmapped layers are currently not supported
	[Oct18 18:09] overlayfs: idmapped layers are currently not supported
	[Oct18 18:11] overlayfs: idmapped layers are currently not supported
	[Oct18 18:13] overlayfs: idmapped layers are currently not supported
	[ +30.969240] overlayfs: idmapped layers are currently not supported
	[Oct18 18:15] overlayfs: idmapped layers are currently not supported
	[Oct18 18:16] overlayfs: idmapped layers are currently not supported
	[Oct18 18:17] overlayfs: idmapped layers are currently not supported
	[ +23.167826] overlayfs: idmapped layers are currently not supported
	[Oct18 18:18] overlayfs: idmapped layers are currently not supported
	[ +38.509809] overlayfs: idmapped layers are currently not supported
	[Oct18 18:19] overlayfs: idmapped layers are currently not supported
	[Oct18 18:21] overlayfs: idmapped layers are currently not supported
	[Oct18 18:22] overlayfs: idmapped layers are currently not supported
	[Oct18 18:23] overlayfs: idmapped layers are currently not supported
	[ +30.822562] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [01370573ce2751c5a9bc9cf2c5b653ed64758dd3e91f3aec786b7f16d88bf722] <==
	{"level":"warn","ts":"2025-10-18T18:23:04.912256Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:04.931896Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:04.959441Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:05.001861Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:05.022306Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:05.055538Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:05.085885Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:05.102040Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:05.113971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:05.133393Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:05.154981Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:05.177373Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:05.201033Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:05.210588Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:05.232518Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:05.269667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:05.273468Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:05.289470Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:05.304563Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:05.326044Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:05.338277Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:05.382335Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:05.408338Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:05.420677Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:05.531225Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60566","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 18:24:04 up  2:06,  0 user,  load average: 2.43, 2.91, 2.73
	Linux default-k8s-diff-port-192562 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [54c91171549a4e5775393d4768d527b1d8e22fd30e268495e4d3b6100ec319a5] <==
	I1018 18:23:07.652927       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 18:23:07.653191       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1018 18:23:07.653309       1 main.go:148] setting mtu 1500 for CNI 
	I1018 18:23:07.653320       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 18:23:07.653329       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T18:23:07Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 18:23:07.852722       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 18:23:07.852817       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 18:23:07.852851       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 18:23:07.853691       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1018 18:23:37.853617       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1018 18:23:37.853737       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1018 18:23:37.853831       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1018 18:23:37.853953       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1018 18:23:39.353304       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 18:23:39.353348       1 metrics.go:72] Registering metrics
	I1018 18:23:39.353408       1 controller.go:711] "Syncing nftables rules"
	I1018 18:23:47.852791       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 18:23:47.852848       1 main.go:301] handling current node
	I1018 18:23:57.852886       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 18:23:57.852976       1 main.go:301] handling current node
	
	
	==> kube-apiserver [23e7b4f21a923153503f5d9f363c452579100dd2a260750e3b7a35d6ca8dcb22] <==
	I1018 18:23:06.602188       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1018 18:23:06.602673       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1018 18:23:06.645984       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1018 18:23:06.646038       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1018 18:23:06.646121       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1018 18:23:06.646150       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1018 18:23:06.646165       1 policy_source.go:240] refreshing policies
	I1018 18:23:06.646276       1 aggregator.go:171] initial CRD sync complete...
	I1018 18:23:06.646284       1 autoregister_controller.go:144] Starting autoregister controller
	I1018 18:23:06.646290       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 18:23:06.646295       1 cache.go:39] Caches are synced for autoregister controller
	I1018 18:23:06.717847       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 18:23:06.719164       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1018 18:23:06.802260       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1018 18:23:07.029552       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 18:23:07.146967       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 18:23:07.619728       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 18:23:07.768757       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 18:23:07.838475       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 18:23:07.861987       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 18:23:07.999396       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.29.30"}
	I1018 18:23:08.098412       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.7.198"}
	I1018 18:23:10.949767       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 18:23:11.007868       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 18:23:11.239758       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [a40f2fadeda1857088554cfe73930b819e69cca05e8a65552a5d8d7bb7b5946d] <==
	I1018 18:23:10.739397       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1018 18:23:10.739501       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1018 18:23:10.739546       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1018 18:23:10.739567       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1018 18:23:10.739576       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 18:23:10.740246       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1018 18:23:10.740316       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1018 18:23:10.741382       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1018 18:23:10.745281       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1018 18:23:10.745815       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 18:23:10.748201       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1018 18:23:10.748424       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 18:23:10.751537       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 18:23:10.755626       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 18:23:10.755763       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 18:23:10.757952       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 18:23:10.758082       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 18:23:10.759009       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1018 18:23:10.760109       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 18:23:10.774193       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1018 18:23:10.792642       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 18:23:10.792748       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 18:23:10.792780       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 18:23:10.799461       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1018 18:23:11.277811       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kubernetes-dashboard/dashboard-metrics-scraper" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [62679bc7de3d96ea22f1ec1fe03d9713354a7daeddddb6d200b8b0232b3a9220] <==
	I1018 18:23:08.125135       1 server_linux.go:53] "Using iptables proxy"
	I1018 18:23:08.317705       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 18:23:08.477259       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 18:23:08.477293       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1018 18:23:08.477363       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 18:23:08.496467       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 18:23:08.496522       1 server_linux.go:132] "Using iptables Proxier"
	I1018 18:23:08.501263       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 18:23:08.501585       1 server.go:527] "Version info" version="v1.34.1"
	I1018 18:23:08.501611       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 18:23:08.505903       1 config.go:106] "Starting endpoint slice config controller"
	I1018 18:23:08.505985       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 18:23:08.506192       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 18:23:08.506212       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 18:23:08.506354       1 config.go:200] "Starting service config controller"
	I1018 18:23:08.506399       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 18:23:08.506455       1 config.go:309] "Starting node config controller"
	I1018 18:23:08.506483       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 18:23:08.606403       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 18:23:08.606411       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 18:23:08.606441       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 18:23:08.606566       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [b95b10640928c3fde63ab1dc3d1f20d7b8532a3c6bb09b5b79dd506a1cada9c2] <==
	I1018 18:23:04.597423       1 serving.go:386] Generated self-signed cert in-memory
	I1018 18:23:08.323621       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 18:23:08.324363       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 18:23:08.340055       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 18:23:08.340213       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1018 18:23:08.340276       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1018 18:23:08.340348       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 18:23:08.345141       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 18:23:08.345255       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 18:23:08.347051       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 18:23:08.347086       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 18:23:08.441002       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1018 18:23:08.447366       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 18:23:08.447387       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 18:23:13 default-k8s-diff-port-192562 kubelet[778]: W1018 18:23:13.426252     778 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/c0a8933c552c9d4e5fb4ca01ca33c573463079ebfb6960b8ac96dc752d5faeaa/crio-195e46e24830a56f539cdc92ba914d4cf5b224fbd9f59f2e07a0c3cac7c7318a WatchSource:0}: Error finding container 195e46e24830a56f539cdc92ba914d4cf5b224fbd9f59f2e07a0c3cac7c7318a: Status 404 returned error can't find the container with id 195e46e24830a56f539cdc92ba914d4cf5b224fbd9f59f2e07a0c3cac7c7318a
	Oct 18 18:23:16 default-k8s-diff-port-192562 kubelet[778]: I1018 18:23:16.641794     778 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 18 18:23:19 default-k8s-diff-port-192562 kubelet[778]: I1018 18:23:19.425850     778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-mq728" podStartSLOduration=3.685476181 podStartE2EDuration="8.425830386s" podCreationTimestamp="2025-10-18 18:23:11 +0000 UTC" firstStartedPulling="2025-10-18 18:23:13.402222776 +0000 UTC m=+12.628962176" lastFinishedPulling="2025-10-18 18:23:18.142576989 +0000 UTC m=+17.369316381" observedRunningTime="2025-10-18 18:23:19.196790706 +0000 UTC m=+18.423530098" watchObservedRunningTime="2025-10-18 18:23:19.425830386 +0000 UTC m=+18.652569786"
	Oct 18 18:23:22 default-k8s-diff-port-192562 kubelet[778]: I1018 18:23:22.193863     778 scope.go:117] "RemoveContainer" containerID="eb0020991d91eb03b8211c773249d54bee83a9ac859a085c3c434ef1176f4c67"
	Oct 18 18:23:23 default-k8s-diff-port-192562 kubelet[778]: I1018 18:23:23.198645     778 scope.go:117] "RemoveContainer" containerID="eb0020991d91eb03b8211c773249d54bee83a9ac859a085c3c434ef1176f4c67"
	Oct 18 18:23:23 default-k8s-diff-port-192562 kubelet[778]: I1018 18:23:23.199254     778 scope.go:117] "RemoveContainer" containerID="b24d0dc1552365fa992da757e3533244a7782f9a010075b9e45e48fca1f40699"
	Oct 18 18:23:23 default-k8s-diff-port-192562 kubelet[778]: E1018 18:23:23.199503     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fz7jc_kubernetes-dashboard(b59a85df-7bc0-4f24-b7c2-4214bd847824)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fz7jc" podUID="b59a85df-7bc0-4f24-b7c2-4214bd847824"
	Oct 18 18:23:24 default-k8s-diff-port-192562 kubelet[778]: I1018 18:23:24.203088     778 scope.go:117] "RemoveContainer" containerID="b24d0dc1552365fa992da757e3533244a7782f9a010075b9e45e48fca1f40699"
	Oct 18 18:23:24 default-k8s-diff-port-192562 kubelet[778]: E1018 18:23:24.203258     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fz7jc_kubernetes-dashboard(b59a85df-7bc0-4f24-b7c2-4214bd847824)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fz7jc" podUID="b59a85df-7bc0-4f24-b7c2-4214bd847824"
	Oct 18 18:23:25 default-k8s-diff-port-192562 kubelet[778]: I1018 18:23:25.206389     778 scope.go:117] "RemoveContainer" containerID="b24d0dc1552365fa992da757e3533244a7782f9a010075b9e45e48fca1f40699"
	Oct 18 18:23:25 default-k8s-diff-port-192562 kubelet[778]: E1018 18:23:25.206573     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fz7jc_kubernetes-dashboard(b59a85df-7bc0-4f24-b7c2-4214bd847824)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fz7jc" podUID="b59a85df-7bc0-4f24-b7c2-4214bd847824"
	Oct 18 18:23:35 default-k8s-diff-port-192562 kubelet[778]: I1018 18:23:35.996179     778 scope.go:117] "RemoveContainer" containerID="b24d0dc1552365fa992da757e3533244a7782f9a010075b9e45e48fca1f40699"
	Oct 18 18:23:36 default-k8s-diff-port-192562 kubelet[778]: I1018 18:23:36.234464     778 scope.go:117] "RemoveContainer" containerID="b24d0dc1552365fa992da757e3533244a7782f9a010075b9e45e48fca1f40699"
	Oct 18 18:23:36 default-k8s-diff-port-192562 kubelet[778]: I1018 18:23:36.234762     778 scope.go:117] "RemoveContainer" containerID="bb0db0af8dc67dd37db129ff5dd70a0a4e0681fa835778fb98bdcf2cc0ac52ee"
	Oct 18 18:23:36 default-k8s-diff-port-192562 kubelet[778]: E1018 18:23:36.234930     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fz7jc_kubernetes-dashboard(b59a85df-7bc0-4f24-b7c2-4214bd847824)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fz7jc" podUID="b59a85df-7bc0-4f24-b7c2-4214bd847824"
	Oct 18 18:23:39 default-k8s-diff-port-192562 kubelet[778]: I1018 18:23:39.244317     778 scope.go:117] "RemoveContainer" containerID="79df2c9185f91fe68153976c279dd9aaa7775b92571473ea614f468db51721de"
	Oct 18 18:23:43 default-k8s-diff-port-192562 kubelet[778]: I1018 18:23:43.388037     778 scope.go:117] "RemoveContainer" containerID="bb0db0af8dc67dd37db129ff5dd70a0a4e0681fa835778fb98bdcf2cc0ac52ee"
	Oct 18 18:23:43 default-k8s-diff-port-192562 kubelet[778]: E1018 18:23:43.390114     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fz7jc_kubernetes-dashboard(b59a85df-7bc0-4f24-b7c2-4214bd847824)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fz7jc" podUID="b59a85df-7bc0-4f24-b7c2-4214bd847824"
	Oct 18 18:23:56 default-k8s-diff-port-192562 kubelet[778]: I1018 18:23:56.996895     778 scope.go:117] "RemoveContainer" containerID="bb0db0af8dc67dd37db129ff5dd70a0a4e0681fa835778fb98bdcf2cc0ac52ee"
	Oct 18 18:23:57 default-k8s-diff-port-192562 kubelet[778]: I1018 18:23:57.294981     778 scope.go:117] "RemoveContainer" containerID="bb0db0af8dc67dd37db129ff5dd70a0a4e0681fa835778fb98bdcf2cc0ac52ee"
	Oct 18 18:23:57 default-k8s-diff-port-192562 kubelet[778]: I1018 18:23:57.295276     778 scope.go:117] "RemoveContainer" containerID="d724bfd66793bad839089c2ac7e4752e48c341e43c36b8084731177c7fea4183"
	Oct 18 18:23:57 default-k8s-diff-port-192562 kubelet[778]: E1018 18:23:57.295431     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fz7jc_kubernetes-dashboard(b59a85df-7bc0-4f24-b7c2-4214bd847824)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fz7jc" podUID="b59a85df-7bc0-4f24-b7c2-4214bd847824"
	Oct 18 18:24:01 default-k8s-diff-port-192562 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 18:24:01 default-k8s-diff-port-192562 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 18:24:01 default-k8s-diff-port-192562 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [968df3fa8857ba03f182ffe49abd49d62aa437f6426d77631a03400f7324c070] <==
	2025/10/18 18:23:18 Using namespace: kubernetes-dashboard
	2025/10/18 18:23:18 Using in-cluster config to connect to apiserver
	2025/10/18 18:23:18 Using secret token for csrf signing
	2025/10/18 18:23:18 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/18 18:23:18 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/18 18:23:18 Successful initial request to the apiserver, version: v1.34.1
	2025/10/18 18:23:18 Generating JWE encryption key
	2025/10/18 18:23:18 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/18 18:23:18 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/18 18:23:18 Initializing JWE encryption key from synchronized object
	2025/10/18 18:23:18 Creating in-cluster Sidecar client
	2025/10/18 18:23:18 Serving insecurely on HTTP port: 9090
	2025/10/18 18:23:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 18:23:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 18:23:18 Starting overwatch
	
	
	==> storage-provisioner [77450e891d0f89a28fe61fe538b628c44c0a3acdc00441daeaf6962e3dc60913] <==
	I1018 18:23:39.389896       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 18:23:39.415169       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 18:23:39.415291       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1018 18:23:39.421920       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:23:42.878210       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:23:47.139639       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:23:50.738518       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:23:53.792544       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:23:56.814969       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:23:56.820005       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 18:23:56.820231       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 18:23:56.820837       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-192562_bfbe0c44-e6b6-443e-9a0c-55914d6c49b1!
	I1018 18:23:56.820699       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ca1be163-b867-4e80-ab6a-bbe296c21eb5", APIVersion:"v1", ResourceVersion:"689", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-192562_bfbe0c44-e6b6-443e-9a0c-55914d6c49b1 became leader
	W1018 18:23:56.825586       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:23:56.828777       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 18:23:56.922105       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-192562_bfbe0c44-e6b6-443e-9a0c-55914d6c49b1!
	W1018 18:23:58.831689       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:23:58.836658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:24:00.840499       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:24:00.847499       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:24:02.851511       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:24:02.857403       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [79df2c9185f91fe68153976c279dd9aaa7775b92571473ea614f468db51721de] <==
	I1018 18:23:08.204784       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1018 18:23:38.248832       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-192562 -n default-k8s-diff-port-192562
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-192562 -n default-k8s-diff-port-192562: exit status 2 (376.096079ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-192562 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-192562
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-192562:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c0a8933c552c9d4e5fb4ca01ca33c573463079ebfb6960b8ac96dc752d5faeaa",
	        "Created": "2025-10-18T18:21:07.306681967Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 204791,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T18:22:53.359425827Z",
	            "FinishedAt": "2025-10-18T18:22:52.553707144Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/c0a8933c552c9d4e5fb4ca01ca33c573463079ebfb6960b8ac96dc752d5faeaa/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c0a8933c552c9d4e5fb4ca01ca33c573463079ebfb6960b8ac96dc752d5faeaa/hostname",
	        "HostsPath": "/var/lib/docker/containers/c0a8933c552c9d4e5fb4ca01ca33c573463079ebfb6960b8ac96dc752d5faeaa/hosts",
	        "LogPath": "/var/lib/docker/containers/c0a8933c552c9d4e5fb4ca01ca33c573463079ebfb6960b8ac96dc752d5faeaa/c0a8933c552c9d4e5fb4ca01ca33c573463079ebfb6960b8ac96dc752d5faeaa-json.log",
	        "Name": "/default-k8s-diff-port-192562",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-192562:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-192562",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c0a8933c552c9d4e5fb4ca01ca33c573463079ebfb6960b8ac96dc752d5faeaa",
	                "LowerDir": "/var/lib/docker/overlay2/dee070e079682e34299d25230ff60b4454bdeead13a662fbf9dd6a74e43397c1-init/diff:/var/lib/docker/overlay2/584ab177b02ad2db5330471b7171ad39934c457d8615b9ee4939a04b59f78474/diff",
	                "MergedDir": "/var/lib/docker/overlay2/dee070e079682e34299d25230ff60b4454bdeead13a662fbf9dd6a74e43397c1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/dee070e079682e34299d25230ff60b4454bdeead13a662fbf9dd6a74e43397c1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/dee070e079682e34299d25230ff60b4454bdeead13a662fbf9dd6a74e43397c1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-192562",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-192562/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-192562",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-192562",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-192562",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b16cf4347e390a1cfb7e1d73af3f36a2fcccba03bd6a8fdcd4614395eeb04d65",
	            "SandboxKey": "/var/run/docker/netns/b16cf4347e39",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-192562": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4a:c4:b8:3c:30:48",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "38c20734cd0994956410457c1029d2a36f99d2c176924ac552fc426e5efdac60",
	                    "EndpointID": "a20665a4eb82c63e646d1ab4236e8ac95459e35bf341507067d451e264f41f71",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-192562",
	                        "c0a8933c552c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-192562 -n default-k8s-diff-port-192562
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-192562 -n default-k8s-diff-port-192562: exit status 2 (364.320706ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-192562 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-192562 logs -n 25: (1.354084924s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cert-options-327418 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-327418          │ jenkins │ v1.37.0 │ 18 Oct 25 18:18 UTC │ 18 Oct 25 18:18 UTC │
	│ delete  │ -p cert-options-327418                                                                                                                                                                                                                        │ cert-options-327418          │ jenkins │ v1.37.0 │ 18 Oct 25 18:18 UTC │ 18 Oct 25 18:18 UTC │
	│ start   │ -p old-k8s-version-918475 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-918475       │ jenkins │ v1.37.0 │ 18 Oct 25 18:18 UTC │ 18 Oct 25 18:19 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-918475 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-918475       │ jenkins │ v1.37.0 │ 18 Oct 25 18:19 UTC │                     │
	│ stop    │ -p old-k8s-version-918475 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-918475       │ jenkins │ v1.37.0 │ 18 Oct 25 18:19 UTC │ 18 Oct 25 18:19 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-918475 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-918475       │ jenkins │ v1.37.0 │ 18 Oct 25 18:19 UTC │ 18 Oct 25 18:19 UTC │
	│ start   │ -p old-k8s-version-918475 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-918475       │ jenkins │ v1.37.0 │ 18 Oct 25 18:19 UTC │ 18 Oct 25 18:20 UTC │
	│ image   │ old-k8s-version-918475 image list --format=json                                                                                                                                                                                               │ old-k8s-version-918475       │ jenkins │ v1.37.0 │ 18 Oct 25 18:20 UTC │ 18 Oct 25 18:20 UTC │
	│ pause   │ -p old-k8s-version-918475 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-918475       │ jenkins │ v1.37.0 │ 18 Oct 25 18:20 UTC │                     │
	│ delete  │ -p old-k8s-version-918475                                                                                                                                                                                                                     │ old-k8s-version-918475       │ jenkins │ v1.37.0 │ 18 Oct 25 18:20 UTC │ 18 Oct 25 18:21 UTC │
	│ start   │ -p cert-expiration-463770 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-463770       │ jenkins │ v1.37.0 │ 18 Oct 25 18:21 UTC │ 18 Oct 25 18:21 UTC │
	│ delete  │ -p old-k8s-version-918475                                                                                                                                                                                                                     │ old-k8s-version-918475       │ jenkins │ v1.37.0 │ 18 Oct 25 18:21 UTC │ 18 Oct 25 18:21 UTC │
	│ start   │ -p default-k8s-diff-port-192562 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-192562 │ jenkins │ v1.37.0 │ 18 Oct 25 18:21 UTC │ 18 Oct 25 18:22 UTC │
	│ delete  │ -p cert-expiration-463770                                                                                                                                                                                                                     │ cert-expiration-463770       │ jenkins │ v1.37.0 │ 18 Oct 25 18:21 UTC │ 18 Oct 25 18:21 UTC │
	│ start   │ -p embed-certs-213943 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-213943           │ jenkins │ v1.37.0 │ 18 Oct 25 18:21 UTC │ 18 Oct 25 18:22 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-192562 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-192562 │ jenkins │ v1.37.0 │ 18 Oct 25 18:22 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-192562 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-192562 │ jenkins │ v1.37.0 │ 18 Oct 25 18:22 UTC │ 18 Oct 25 18:22 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-192562 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-192562 │ jenkins │ v1.37.0 │ 18 Oct 25 18:22 UTC │ 18 Oct 25 18:22 UTC │
	│ start   │ -p default-k8s-diff-port-192562 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-192562 │ jenkins │ v1.37.0 │ 18 Oct 25 18:22 UTC │ 18 Oct 25 18:23 UTC │
	│ addons  │ enable metrics-server -p embed-certs-213943 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-213943           │ jenkins │ v1.37.0 │ 18 Oct 25 18:23 UTC │                     │
	│ stop    │ -p embed-certs-213943 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-213943           │ jenkins │ v1.37.0 │ 18 Oct 25 18:23 UTC │ 18 Oct 25 18:23 UTC │
	│ addons  │ enable dashboard -p embed-certs-213943 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-213943           │ jenkins │ v1.37.0 │ 18 Oct 25 18:23 UTC │ 18 Oct 25 18:23 UTC │
	│ start   │ -p embed-certs-213943 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-213943           │ jenkins │ v1.37.0 │ 18 Oct 25 18:23 UTC │                     │
	│ image   │ default-k8s-diff-port-192562 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-192562 │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │ 18 Oct 25 18:24 UTC │
	│ pause   │ -p default-k8s-diff-port-192562 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-192562 │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 18:23:24
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 18:23:24.617795  207600 out.go:360] Setting OutFile to fd 1 ...
	I1018 18:23:24.617928  207600 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 18:23:24.617938  207600 out.go:374] Setting ErrFile to fd 2...
	I1018 18:23:24.617943  207600 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 18:23:24.618207  207600 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-2509/.minikube/bin
	I1018 18:23:24.618577  207600 out.go:368] Setting JSON to false
	I1018 18:23:24.619524  207600 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7554,"bootTime":1760804251,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1018 18:23:24.619605  207600 start.go:141] virtualization:  
	I1018 18:23:24.622697  207600 out.go:179] * [embed-certs-213943] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 18:23:24.626595  207600 out.go:179]   - MINIKUBE_LOCATION=21409
	I1018 18:23:24.626663  207600 notify.go:220] Checking for updates...
	I1018 18:23:24.632599  207600 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 18:23:24.635767  207600 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-2509/kubeconfig
	I1018 18:23:24.638863  207600 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-2509/.minikube
	I1018 18:23:24.641819  207600 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 18:23:24.644727  207600 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 18:23:24.648300  207600 config.go:182] Loaded profile config "embed-certs-213943": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 18:23:24.649141  207600 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 18:23:24.683136  207600 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 18:23:24.683265  207600 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 18:23:24.741288  207600 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 18:23:24.731264595 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 18:23:24.741397  207600 docker.go:318] overlay module found
	I1018 18:23:24.744645  207600 out.go:179] * Using the docker driver based on existing profile
	I1018 18:23:24.747502  207600 start.go:305] selected driver: docker
	I1018 18:23:24.747526  207600 start.go:925] validating driver "docker" against &{Name:embed-certs-213943 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-213943 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 18:23:24.747642  207600 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 18:23:24.748359  207600 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 18:23:24.809329  207600 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 18:23:24.799788108 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 18:23:24.809666  207600 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 18:23:24.809702  207600 cni.go:84] Creating CNI manager for ""
	I1018 18:23:24.809757  207600 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 18:23:24.809811  207600 start.go:349] cluster config:
	{Name:embed-certs-213943 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-213943 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 18:23:24.813145  207600 out.go:179] * Starting "embed-certs-213943" primary control-plane node in "embed-certs-213943" cluster
	I1018 18:23:24.816108  207600 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 18:23:24.819046  207600 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 18:23:24.821780  207600 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 18:23:24.821835  207600 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1018 18:23:24.821847  207600 cache.go:58] Caching tarball of preloaded images
	I1018 18:23:24.821885  207600 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 18:23:24.821988  207600 preload.go:233] Found /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 18:23:24.822000  207600 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 18:23:24.822124  207600 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/embed-certs-213943/config.json ...
	I1018 18:23:24.849234  207600 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 18:23:24.849255  207600 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 18:23:24.849275  207600 cache.go:232] Successfully downloaded all kic artifacts
	I1018 18:23:24.849297  207600 start.go:360] acquireMachinesLock for embed-certs-213943: {Name:mk6236f8122624f68835f4877bda621eb0a7ae61 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 18:23:24.849363  207600 start.go:364] duration metric: took 42.347µs to acquireMachinesLock for "embed-certs-213943"
	I1018 18:23:24.849384  207600 start.go:96] Skipping create...Using existing machine configuration
	I1018 18:23:24.849393  207600 fix.go:54] fixHost starting: 
	I1018 18:23:24.849653  207600 cli_runner.go:164] Run: docker container inspect embed-certs-213943 --format={{.State.Status}}
	I1018 18:23:24.866587  207600 fix.go:112] recreateIfNeeded on embed-certs-213943: state=Stopped err=<nil>
	W1018 18:23:24.866618  207600 fix.go:138] unexpected machine state, will restart: <nil>
	W1018 18:23:24.911310  204660 pod_ready.go:104] pod "coredns-66bc5c9577-psj29" is not "Ready", error: <nil>
	W1018 18:23:26.911823  204660 pod_ready.go:104] pod "coredns-66bc5c9577-psj29" is not "Ready", error: <nil>
	I1018 18:23:24.869846  207600 out.go:252] * Restarting existing docker container for "embed-certs-213943" ...
	I1018 18:23:24.869934  207600 cli_runner.go:164] Run: docker start embed-certs-213943
	I1018 18:23:25.151279  207600 cli_runner.go:164] Run: docker container inspect embed-certs-213943 --format={{.State.Status}}
	I1018 18:23:25.173255  207600 kic.go:430] container "embed-certs-213943" state is running.
	I1018 18:23:25.173639  207600 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-213943
	I1018 18:23:25.196763  207600 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/embed-certs-213943/config.json ...
	I1018 18:23:25.197018  207600 machine.go:93] provisionDockerMachine start ...
	I1018 18:23:25.197092  207600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213943
	I1018 18:23:25.219843  207600 main.go:141] libmachine: Using SSH client type: native
	I1018 18:23:25.220174  207600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1018 18:23:25.220184  207600 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 18:23:25.220894  207600 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33716->127.0.0.1:33068: read: connection reset by peer
	I1018 18:23:28.368607  207600 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-213943
	
	I1018 18:23:28.368643  207600 ubuntu.go:182] provisioning hostname "embed-certs-213943"
	I1018 18:23:28.368706  207600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213943
	I1018 18:23:28.388844  207600 main.go:141] libmachine: Using SSH client type: native
	I1018 18:23:28.389197  207600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1018 18:23:28.389218  207600 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-213943 && echo "embed-certs-213943" | sudo tee /etc/hostname
	I1018 18:23:28.550268  207600 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-213943
	
	I1018 18:23:28.550353  207600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213943
	I1018 18:23:28.571684  207600 main.go:141] libmachine: Using SSH client type: native
	I1018 18:23:28.572041  207600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1018 18:23:28.572066  207600 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-213943' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-213943/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-213943' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 18:23:28.721224  207600 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 18:23:28.721247  207600 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-2509/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-2509/.minikube}
	I1018 18:23:28.721280  207600 ubuntu.go:190] setting up certificates
	I1018 18:23:28.721290  207600 provision.go:84] configureAuth start
	I1018 18:23:28.721349  207600 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-213943
	I1018 18:23:28.738348  207600 provision.go:143] copyHostCerts
	I1018 18:23:28.738424  207600 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem, removing ...
	I1018 18:23:28.738542  207600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem
	I1018 18:23:28.738637  207600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem (1078 bytes)
	I1018 18:23:28.738765  207600 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem, removing ...
	I1018 18:23:28.738775  207600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem
	I1018 18:23:28.738804  207600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem (1123 bytes)
	I1018 18:23:28.738864  207600 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem, removing ...
	I1018 18:23:28.738872  207600 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem
	I1018 18:23:28.738896  207600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem (1675 bytes)
	I1018 18:23:28.738959  207600 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem org=jenkins.embed-certs-213943 san=[127.0.0.1 192.168.85.2 embed-certs-213943 localhost minikube]
	I1018 18:23:29.032199  207600 provision.go:177] copyRemoteCerts
	I1018 18:23:29.032299  207600 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 18:23:29.032346  207600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213943
	I1018 18:23:29.049922  207600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/embed-certs-213943/id_rsa Username:docker}
	I1018 18:23:29.153048  207600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 18:23:29.170942  207600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 18:23:29.188389  207600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1018 18:23:29.206135  207600 provision.go:87] duration metric: took 484.832123ms to configureAuth
	I1018 18:23:29.206159  207600 ubuntu.go:206] setting minikube options for container-runtime
	I1018 18:23:29.206349  207600 config.go:182] Loaded profile config "embed-certs-213943": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 18:23:29.206456  207600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213943
	I1018 18:23:29.226207  207600 main.go:141] libmachine: Using SSH client type: native
	I1018 18:23:29.226509  207600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1018 18:23:29.226523  207600 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 18:23:29.546737  207600 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 18:23:29.546758  207600 machine.go:96] duration metric: took 4.349730235s to provisionDockerMachine
	I1018 18:23:29.546769  207600 start.go:293] postStartSetup for "embed-certs-213943" (driver="docker")
	I1018 18:23:29.546780  207600 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 18:23:29.546857  207600 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 18:23:29.546906  207600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213943
	I1018 18:23:29.575914  207600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/embed-certs-213943/id_rsa Username:docker}
	I1018 18:23:29.680784  207600 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 18:23:29.683971  207600 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 18:23:29.684002  207600 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 18:23:29.684013  207600 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/addons for local assets ...
	I1018 18:23:29.684066  207600 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/files for local assets ...
	I1018 18:23:29.684148  207600 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> 43202.pem in /etc/ssl/certs
	I1018 18:23:29.684261  207600 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 18:23:29.691612  207600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /etc/ssl/certs/43202.pem (1708 bytes)
	I1018 18:23:29.708867  207600 start.go:296] duration metric: took 162.071183ms for postStartSetup
	I1018 18:23:29.708973  207600 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 18:23:29.709016  207600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213943
	I1018 18:23:29.725660  207600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/embed-certs-213943/id_rsa Username:docker}
	I1018 18:23:29.826114  207600 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 18:23:29.830971  207600 fix.go:56] duration metric: took 4.981572653s for fixHost
	I1018 18:23:29.830996  207600 start.go:83] releasing machines lock for "embed-certs-213943", held for 4.981621885s
	I1018 18:23:29.831076  207600 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-213943
	I1018 18:23:29.848133  207600 ssh_runner.go:195] Run: cat /version.json
	I1018 18:23:29.848188  207600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213943
	I1018 18:23:29.848457  207600 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 18:23:29.848508  207600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213943
	I1018 18:23:29.866263  207600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/embed-certs-213943/id_rsa Username:docker}
	I1018 18:23:29.869026  207600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/embed-certs-213943/id_rsa Username:docker}
	I1018 18:23:29.969727  207600 ssh_runner.go:195] Run: systemctl --version
	I1018 18:23:30.067036  207600 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 18:23:30.107575  207600 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 18:23:30.112571  207600 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 18:23:30.112651  207600 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 18:23:30.121843  207600 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 18:23:30.121908  207600 start.go:495] detecting cgroup driver to use...
	I1018 18:23:30.121947  207600 detect.go:187] detected "cgroupfs" cgroup driver on host os
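The cgroup driver reported here is detected from the host; a rough manual check under the same setup (Docker driver on a cgroupfs host) might look like:
    docker info --format '{{.CgroupDriver}}'   # expect "cgroupfs" on this host
    stat -fc %T /sys/fs/cgroup                 # "cgroup2fs" would indicate unified cgroup v2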
	I1018 18:23:30.122016  207600 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 18:23:30.138155  207600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 18:23:30.152077  207600 docker.go:218] disabling cri-docker service (if available) ...
	I1018 18:23:30.152166  207600 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 18:23:30.168898  207600 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 18:23:30.181883  207600 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 18:23:30.293824  207600 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 18:23:30.416205  207600 docker.go:234] disabling docker service ...
	I1018 18:23:30.416269  207600 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 18:23:30.431366  207600 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 18:23:30.444641  207600 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 18:23:30.561989  207600 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 18:23:30.700270  207600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 18:23:30.713230  207600 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 18:23:30.730201  207600 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 18:23:30.730307  207600 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:23:30.741647  207600 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 18:23:30.741757  207600 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:23:30.750966  207600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:23:30.760165  207600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:23:30.769385  207600 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 18:23:30.778084  207600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:23:30.787525  207600 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:23:30.796415  207600 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:23:30.805661  207600 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 18:23:30.813058  207600 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 18:23:30.820430  207600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 18:23:30.946384  207600 ssh_runner.go:195] Run: sudo systemctl restart crio
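The sed edits above all target /etc/crio/crio.conf.d/02-crio.conf; after the restart, the effective values can be spot-checked with something like (a sketch, not part of the test run):
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    sudo systemctl is-active crio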
	I1018 18:23:31.099326  207600 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 18:23:31.099417  207600 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 18:23:31.103712  207600 start.go:563] Will wait 60s for crictl version
	I1018 18:23:31.103821  207600 ssh_runner.go:195] Run: which crictl
	I1018 18:23:31.107578  207600 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 18:23:31.136355  207600 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
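The version probe above goes through crictl against the CRI-O socket configured in /etc/crictl.yaml a few lines earlier; the equivalent manual call would be:
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version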
	I1018 18:23:31.136508  207600 ssh_runner.go:195] Run: crio --version
	I1018 18:23:31.166012  207600 ssh_runner.go:195] Run: crio --version
	I1018 18:23:31.198710  207600 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 18:23:31.201635  207600 cli_runner.go:164] Run: docker network inspect embed-certs-213943 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 18:23:31.218863  207600 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1018 18:23:31.223869  207600 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
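The one-liner above rewrites /etc/hosts in place: it drops any stale host.minikube.internal entry, appends the gateway mapping, and copies the temp file back. Verifying the result afterwards is just:
    grep host.minikube.internal /etc/hosts   # expect: 192.168.85.1  host.minikube.internal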
	I1018 18:23:31.233707  207600 kubeadm.go:883] updating cluster {Name:embed-certs-213943 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-213943 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 18:23:31.233833  207600 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 18:23:31.233900  207600 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 18:23:31.271684  207600 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 18:23:31.271705  207600 crio.go:433] Images already preloaded, skipping extraction
	I1018 18:23:31.271763  207600 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 18:23:31.300978  207600 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 18:23:31.301002  207600 cache_images.go:85] Images are preloaded, skipping loading
	I1018 18:23:31.301022  207600 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1018 18:23:31.301132  207600 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-213943 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-213943 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 18:23:31.301217  207600 ssh_runner.go:195] Run: crio config
	I1018 18:23:31.381373  207600 cni.go:84] Creating CNI manager for ""
	I1018 18:23:31.381395  207600 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 18:23:31.381415  207600 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 18:23:31.381438  207600 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-213943 NodeName:embed-certs-213943 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 18:23:31.381564  207600 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-213943"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 18:23:31.381635  207600 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 18:23:31.389389  207600 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 18:23:31.389458  207600 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 18:23:31.397101  207600 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1018 18:23:31.412825  207600 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 18:23:31.429853  207600 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
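The rendered kubeadm config shown above is what gets written to /var/tmp/minikube/kubeadm.yaml.new here; newer kubeadm releases can lint such a file before it is used, so a sketch of an independent sanity check (assuming the bundled v1.34.1 binary) would be:
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new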
	I1018 18:23:31.443522  207600 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1018 18:23:31.447278  207600 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 18:23:31.457800  207600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 18:23:31.589117  207600 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 18:23:31.607623  207600 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/embed-certs-213943 for IP: 192.168.85.2
	I1018 18:23:31.607700  207600 certs.go:195] generating shared ca certs ...
	I1018 18:23:31.607730  207600 certs.go:227] acquiring lock for ca certs: {Name:mk544ed642fa2832e9f6dd22fa45f3270b7c1ad4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:23:31.607920  207600 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key
	I1018 18:23:31.607992  207600 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key
	I1018 18:23:31.608014  207600 certs.go:257] generating profile certs ...
	I1018 18:23:31.608130  207600 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/embed-certs-213943/client.key
	I1018 18:23:31.608217  207600 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/embed-certs-213943/apiserver.key.b72dfec4
	I1018 18:23:31.608289  207600 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/embed-certs-213943/proxy-client.key
	I1018 18:23:31.608434  207600 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem (1338 bytes)
	W1018 18:23:31.608490  207600 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320_empty.pem, impossibly tiny 0 bytes
	I1018 18:23:31.608518  207600 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 18:23:31.608574  207600 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem (1078 bytes)
	I1018 18:23:31.608623  207600 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem (1123 bytes)
	I1018 18:23:31.608687  207600 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem (1675 bytes)
	I1018 18:23:31.608754  207600 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem (1708 bytes)
	I1018 18:23:31.609443  207600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 18:23:31.634278  207600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 18:23:31.656855  207600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 18:23:31.679745  207600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 18:23:31.708395  207600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/embed-certs-213943/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1018 18:23:31.737622  207600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/embed-certs-213943/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 18:23:31.762577  207600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/embed-certs-213943/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 18:23:31.786531  207600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/embed-certs-213943/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 18:23:31.811959  207600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /usr/share/ca-certificates/43202.pem (1708 bytes)
	I1018 18:23:31.834308  207600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 18:23:31.853365  207600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem --> /usr/share/ca-certificates/4320.pem (1338 bytes)
	I1018 18:23:31.872882  207600 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 18:23:31.885942  207600 ssh_runner.go:195] Run: openssl version
	I1018 18:23:31.892138  207600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43202.pem && ln -fs /usr/share/ca-certificates/43202.pem /etc/ssl/certs/43202.pem"
	I1018 18:23:31.900280  207600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43202.pem
	I1018 18:23:31.903980  207600 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 17:19 /usr/share/ca-certificates/43202.pem
	I1018 18:23:31.904066  207600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43202.pem
	I1018 18:23:31.946917  207600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43202.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 18:23:31.955406  207600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 18:23:31.963808  207600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 18:23:31.967482  207600 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I1018 18:23:31.967573  207600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 18:23:32.008880  207600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 18:23:32.017730  207600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4320.pem && ln -fs /usr/share/ca-certificates/4320.pem /etc/ssl/certs/4320.pem"
	I1018 18:23:32.026412  207600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4320.pem
	I1018 18:23:32.030712  207600 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 17:19 /usr/share/ca-certificates/4320.pem
	I1018 18:23:32.030823  207600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4320.pem
	I1018 18:23:32.072118  207600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4320.pem /etc/ssl/certs/51391683.0"
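The openssl/ln pairs above follow the standard OpenSSL hash-directory layout: each CA file under /usr/share/ca-certificates is symlinked in /etc/ssl/certs under its subject-hash name so verification can find it. A minimal sketch of the same step:
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"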
	I1018 18:23:32.080782  207600 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 18:23:32.084655  207600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 18:23:32.126495  207600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 18:23:32.167951  207600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 18:23:32.214199  207600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 18:23:32.260663  207600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 18:23:32.319590  207600 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
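Each -checkend 86400 call above exits non-zero only if the certificate would expire within 24 hours; the same check can be applied across all control-plane certs in one loop (illustrative only):
    for c in /var/lib/minikube/certs/*.crt /var/lib/minikube/certs/etcd/*.crt; do
      sudo openssl x509 -noout -checkend 86400 -in "$c" || echo "expiring soon: $c"
    done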
	I1018 18:23:32.387077  207600 kubeadm.go:400] StartCluster: {Name:embed-certs-213943 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-213943 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 18:23:32.387232  207600 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 18:23:32.387329  207600 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 18:23:32.475629  207600 cri.go:89] found id: "97b7723e6cc93259a63a7dc305c6dd7a4974876e6dc283507e6d8ce5af737bcb"
	I1018 18:23:32.475696  207600 cri.go:89] found id: "9ae5471fee776db561d720631098bdc12432bd23b92d88eb2d07deb57fed51ac"
	I1018 18:23:32.475716  207600 cri.go:89] found id: "579b2e90159d3f472f72b4d74cead642311dbb50b6aa56372bed6e44fa5f0026"
	I1018 18:23:32.475735  207600 cri.go:89] found id: "320b2b6a0f723790bef132bc7d46d0c55becfa751e8cd836c15cde5c23b0446d"
	I1018 18:23:32.475767  207600 cri.go:89] found id: ""
	I1018 18:23:32.475848  207600 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 18:23:32.495337  207600 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T18:23:32Z" level=error msg="open /run/runc: no such file or directory"
	I1018 18:23:32.495510  207600 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 18:23:32.509107  207600 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 18:23:32.509176  207600 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 18:23:32.509255  207600 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 18:23:32.521736  207600 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 18:23:32.522438  207600 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-213943" does not appear in /home/jenkins/minikube-integration/21409-2509/kubeconfig
	I1018 18:23:32.522771  207600 kubeconfig.go:62] /home/jenkins/minikube-integration/21409-2509/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-213943" cluster setting kubeconfig missing "embed-certs-213943" context setting]
	I1018 18:23:32.523391  207600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/kubeconfig: {Name:mk229d7d0b6575771899e0ce8a346bbddf5cf86b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:23:32.525296  207600 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 18:23:32.536746  207600 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1018 18:23:32.536830  207600 kubeadm.go:601] duration metric: took 27.634758ms to restartPrimaryControlPlane
	I1018 18:23:32.536855  207600 kubeadm.go:402] duration metric: took 149.786808ms to StartCluster
	I1018 18:23:32.536894  207600 settings.go:142] acquiring lock: {Name:mk3a3fd093bc95e20cc1842611fedcbe4a79e692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:23:32.536999  207600 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-2509/kubeconfig
	I1018 18:23:32.538501  207600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/kubeconfig: {Name:mk229d7d0b6575771899e0ce8a346bbddf5cf86b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:23:32.538953  207600 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 18:23:32.539340  207600 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 18:23:32.539421  207600 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-213943"
	I1018 18:23:32.539435  207600 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-213943"
	W1018 18:23:32.539441  207600 addons.go:247] addon storage-provisioner should already be in state true
	I1018 18:23:32.539464  207600 host.go:66] Checking if "embed-certs-213943" exists ...
	I1018 18:23:32.539931  207600 cli_runner.go:164] Run: docker container inspect embed-certs-213943 --format={{.State.Status}}
	I1018 18:23:32.540247  207600 config.go:182] Loaded profile config "embed-certs-213943": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 18:23:32.540365  207600 addons.go:69] Setting default-storageclass=true in profile "embed-certs-213943"
	I1018 18:23:32.540436  207600 addons.go:69] Setting dashboard=true in profile "embed-certs-213943"
	I1018 18:23:32.540447  207600 addons.go:238] Setting addon dashboard=true in "embed-certs-213943"
	W1018 18:23:32.540453  207600 addons.go:247] addon dashboard should already be in state true
	I1018 18:23:32.540475  207600 host.go:66] Checking if "embed-certs-213943" exists ...
	I1018 18:23:32.540906  207600 cli_runner.go:164] Run: docker container inspect embed-certs-213943 --format={{.State.Status}}
	I1018 18:23:32.541106  207600 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-213943"
	I1018 18:23:32.541403  207600 cli_runner.go:164] Run: docker container inspect embed-certs-213943 --format={{.State.Status}}
	I1018 18:23:32.549110  207600 out.go:179] * Verifying Kubernetes components...
	I1018 18:23:32.557928  207600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 18:23:32.581043  207600 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 18:23:32.584507  207600 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 18:23:32.584533  207600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 18:23:32.584598  207600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213943
	I1018 18:23:32.606047  207600 addons.go:238] Setting addon default-storageclass=true in "embed-certs-213943"
	W1018 18:23:32.606081  207600 addons.go:247] addon default-storageclass should already be in state true
	I1018 18:23:32.606107  207600 host.go:66] Checking if "embed-certs-213943" exists ...
	I1018 18:23:32.606587  207600 cli_runner.go:164] Run: docker container inspect embed-certs-213943 --format={{.State.Status}}
	I1018 18:23:32.645177  207600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/embed-certs-213943/id_rsa Username:docker}
	I1018 18:23:32.647325  207600 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1018 18:23:32.651036  207600 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 18:23:32.651054  207600 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 18:23:32.651116  207600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213943
	I1018 18:23:32.655644  207600 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1018 18:23:29.413852  204660 pod_ready.go:104] pod "coredns-66bc5c9577-psj29" is not "Ready", error: <nil>
	W1018 18:23:31.912467  204660 pod_ready.go:104] pod "coredns-66bc5c9577-psj29" is not "Ready", error: <nil>
	I1018 18:23:32.662964  207600 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1018 18:23:32.662994  207600 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1018 18:23:32.663070  207600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213943
	I1018 18:23:32.679139  207600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/embed-certs-213943/id_rsa Username:docker}
	I1018 18:23:32.702430  207600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/embed-certs-213943/id_rsa Username:docker}
	I1018 18:23:32.851884  207600 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 18:23:32.883758  207600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 18:23:32.886291  207600 node_ready.go:35] waiting up to 6m0s for node "embed-certs-213943" to be "Ready" ...
	I1018 18:23:32.930301  207600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 18:23:33.076581  207600 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1018 18:23:33.076608  207600 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1018 18:23:33.170537  207600 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1018 18:23:33.170609  207600 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1018 18:23:33.197964  207600 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1018 18:23:33.198041  207600 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1018 18:23:33.222478  207600 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1018 18:23:33.222500  207600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1018 18:23:33.242355  207600 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1018 18:23:33.242380  207600 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1018 18:23:33.270728  207600 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1018 18:23:33.270762  207600 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1018 18:23:33.293397  207600 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1018 18:23:33.293423  207600 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1018 18:23:33.315173  207600 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1018 18:23:33.315193  207600 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1018 18:23:33.341833  207600 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 18:23:33.341857  207600 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1018 18:23:33.383539  207600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
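The dashboard manifests are applied in a single kubectl call against the in-VM kubeconfig; whether the deployment actually came up can be checked afterwards (namespace and deployment name assumed from the upstream dashboard manifests):
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.34.1/kubectl -n kubernetes-dashboard \
      rollout status deploy/kubernetes-dashboard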
	W1018 18:23:34.411662  204660 pod_ready.go:104] pod "coredns-66bc5c9577-psj29" is not "Ready", error: <nil>
	W1018 18:23:36.412895  204660 pod_ready.go:104] pod "coredns-66bc5c9577-psj29" is not "Ready", error: <nil>
	I1018 18:23:37.899940  207600 node_ready.go:49] node "embed-certs-213943" is "Ready"
	I1018 18:23:37.900021  207600 node_ready.go:38] duration metric: took 5.013667626s for node "embed-certs-213943" to be "Ready" ...
	I1018 18:23:37.900056  207600 api_server.go:52] waiting for apiserver process to appear ...
	I1018 18:23:37.900160  207600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 18:23:39.795977  207600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.912139699s)
	I1018 18:23:39.796052  207600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.865718279s)
	I1018 18:23:39.854969  207600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.471386295s)
	I1018 18:23:39.855195  207600 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.955005255s)
	I1018 18:23:39.855234  207600 api_server.go:72] duration metric: took 7.316213602s to wait for apiserver process to appear ...
	I1018 18:23:39.855246  207600 api_server.go:88] waiting for apiserver healthz status ...
	I1018 18:23:39.855264  207600 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 18:23:39.858431  207600 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-213943 addons enable metrics-server
	
	I1018 18:23:39.861259  207600 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	W1018 18:23:38.414207  204660 pod_ready.go:104] pod "coredns-66bc5c9577-psj29" is not "Ready", error: <nil>
	W1018 18:23:40.911359  204660 pod_ready.go:104] pod "coredns-66bc5c9577-psj29" is not "Ready", error: <nil>
	W1018 18:23:42.916002  204660 pod_ready.go:104] pod "coredns-66bc5c9577-psj29" is not "Ready", error: <nil>
	I1018 18:23:39.864807  207600 addons.go:514] duration metric: took 7.325459308s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1018 18:23:39.865699  207600 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 18:23:39.865725  207600 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 18:23:40.355972  207600 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 18:23:40.364479  207600 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
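The 500-then-200 transition above is the apiserver's aggregated health endpoint; the per-check breakdown seen in the log can usually be fetched directly, since /healthz is readable without credentials under default RBAC (a sketch for this run's endpoint):
    curl -sk 'https://192.168.85.2:8443/healthz?verbose'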
	I1018 18:23:40.365602  207600 api_server.go:141] control plane version: v1.34.1
	I1018 18:23:40.365634  207600 api_server.go:131] duration metric: took 510.380063ms to wait for apiserver health ...
	I1018 18:23:40.365643  207600 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 18:23:40.372362  207600 system_pods.go:59] 8 kube-system pods found
	I1018 18:23:40.372404  207600 system_pods.go:61] "coredns-66bc5c9577-grf2z" [0a6125b1-a0eb-4600-9b53-35017d6ee21b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 18:23:40.372423  207600 system_pods.go:61] "etcd-embed-certs-213943" [8b55657c-393f-48c1-9a5d-6ab96021decb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 18:23:40.372430  207600 system_pods.go:61] "kindnet-44fc8" [b35c637a-9afc-46ee-93dd-89db133869e9] Running
	I1018 18:23:40.372438  207600 system_pods.go:61] "kube-apiserver-embed-certs-213943" [e615020d-5cc5-4e06-8605-21cfcd9b1750] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 18:23:40.372449  207600 system_pods.go:61] "kube-controller-manager-embed-certs-213943" [01383f1b-63a2-47e1-8946-f987e9bcee73] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 18:23:40.372454  207600 system_pods.go:61] "kube-proxy-gcf8n" [0f81c7f5-8e47-4826-bdb3-867782c394a7] Running
	I1018 18:23:40.372467  207600 system_pods.go:61] "kube-scheduler-embed-certs-213943" [216b830a-b447-408c-a3d1-7233624d11a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 18:23:40.372472  207600 system_pods.go:61] "storage-provisioner" [8b4837a6-135d-4719-b80f-0e37d07f3fe4] Running
	I1018 18:23:40.372479  207600 system_pods.go:74] duration metric: took 6.830036ms to wait for pod list to return data ...
	I1018 18:23:40.372498  207600 default_sa.go:34] waiting for default service account to be created ...
	I1018 18:23:40.375449  207600 default_sa.go:45] found service account: "default"
	I1018 18:23:40.375474  207600 default_sa.go:55] duration metric: took 2.963902ms for default service account to be created ...
	I1018 18:23:40.375483  207600 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 18:23:40.379770  207600 system_pods.go:86] 8 kube-system pods found
	I1018 18:23:40.379809  207600 system_pods.go:89] "coredns-66bc5c9577-grf2z" [0a6125b1-a0eb-4600-9b53-35017d6ee21b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 18:23:40.379819  207600 system_pods.go:89] "etcd-embed-certs-213943" [8b55657c-393f-48c1-9a5d-6ab96021decb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 18:23:40.379824  207600 system_pods.go:89] "kindnet-44fc8" [b35c637a-9afc-46ee-93dd-89db133869e9] Running
	I1018 18:23:40.379831  207600 system_pods.go:89] "kube-apiserver-embed-certs-213943" [e615020d-5cc5-4e06-8605-21cfcd9b1750] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 18:23:40.379838  207600 system_pods.go:89] "kube-controller-manager-embed-certs-213943" [01383f1b-63a2-47e1-8946-f987e9bcee73] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 18:23:40.379842  207600 system_pods.go:89] "kube-proxy-gcf8n" [0f81c7f5-8e47-4826-bdb3-867782c394a7] Running
	I1018 18:23:40.379849  207600 system_pods.go:89] "kube-scheduler-embed-certs-213943" [216b830a-b447-408c-a3d1-7233624d11a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 18:23:40.379853  207600 system_pods.go:89] "storage-provisioner" [8b4837a6-135d-4719-b80f-0e37d07f3fe4] Running
	I1018 18:23:40.379861  207600 system_pods.go:126] duration metric: took 4.372027ms to wait for k8s-apps to be running ...
	I1018 18:23:40.379872  207600 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 18:23:40.379933  207600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 18:23:40.420301  207600 system_svc.go:56] duration metric: took 40.420654ms WaitForService to wait for kubelet
	I1018 18:23:40.420332  207600 kubeadm.go:586] duration metric: took 7.881311265s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 18:23:40.420352  207600 node_conditions.go:102] verifying NodePressure condition ...
	I1018 18:23:40.423527  207600 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 18:23:40.423559  207600 node_conditions.go:123] node cpu capacity is 2
	I1018 18:23:40.423572  207600 node_conditions.go:105] duration metric: took 3.21495ms to run NodePressure ...
	I1018 18:23:40.423583  207600 start.go:241] waiting for startup goroutines ...
	I1018 18:23:40.423595  207600 start.go:246] waiting for cluster config update ...
	I1018 18:23:40.423606  207600 start.go:255] writing updated cluster config ...
	I1018 18:23:40.423902  207600 ssh_runner.go:195] Run: rm -f paused
	I1018 18:23:40.429429  207600 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 18:23:40.471097  207600 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-grf2z" in "kube-system" namespace to be "Ready" or be gone ...
	W1018 18:23:42.477314  207600 pod_ready.go:104] pod "coredns-66bc5c9577-grf2z" is not "Ready", error: <nil>
	W1018 18:23:44.478667  207600 pod_ready.go:104] pod "coredns-66bc5c9577-grf2z" is not "Ready", error: <nil>
	W1018 18:23:45.412491  204660 pod_ready.go:104] pod "coredns-66bc5c9577-psj29" is not "Ready", error: <nil>
	I1018 18:23:46.915332  204660 pod_ready.go:94] pod "coredns-66bc5c9577-psj29" is "Ready"
	I1018 18:23:46.915420  204660 pod_ready.go:86] duration metric: took 38.009682628s for pod "coredns-66bc5c9577-psj29" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:23:46.924316  204660 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-192562" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:23:46.937598  204660 pod_ready.go:94] pod "etcd-default-k8s-diff-port-192562" is "Ready"
	I1018 18:23:46.937677  204660 pod_ready.go:86] duration metric: took 13.267969ms for pod "etcd-default-k8s-diff-port-192562" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:23:46.946201  204660 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-192562" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:23:46.956804  204660 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-192562" is "Ready"
	I1018 18:23:46.956832  204660 pod_ready.go:86] duration metric: took 10.603697ms for pod "kube-apiserver-default-k8s-diff-port-192562" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:23:46.964692  204660 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-192562" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:23:47.110571  204660 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-192562" is "Ready"
	I1018 18:23:47.110603  204660 pod_ready.go:86] duration metric: took 145.882865ms for pod "kube-controller-manager-default-k8s-diff-port-192562" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:23:47.310686  204660 pod_ready.go:83] waiting for pod "kube-proxy-c7jft" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:23:47.708611  204660 pod_ready.go:94] pod "kube-proxy-c7jft" is "Ready"
	I1018 18:23:47.708641  204660 pod_ready.go:86] duration metric: took 397.929702ms for pod "kube-proxy-c7jft" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:23:47.909940  204660 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-192562" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:23:48.310403  204660 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-192562" is "Ready"
	I1018 18:23:48.310431  204660 pod_ready.go:86] duration metric: took 400.468229ms for pod "kube-scheduler-default-k8s-diff-port-192562" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:23:48.310443  204660 pod_ready.go:40] duration metric: took 39.409982667s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 18:23:48.395045  204660 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 18:23:48.398026  204660 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-192562" cluster and "default" namespace by default
	W1018 18:23:46.981624  207600 pod_ready.go:104] pod "coredns-66bc5c9577-grf2z" is not "Ready", error: <nil>
	W1018 18:23:49.478627  207600 pod_ready.go:104] pod "coredns-66bc5c9577-grf2z" is not "Ready", error: <nil>
	W1018 18:23:51.980872  207600 pod_ready.go:104] pod "coredns-66bc5c9577-grf2z" is not "Ready", error: <nil>
	W1018 18:23:54.477014  207600 pod_ready.go:104] pod "coredns-66bc5c9577-grf2z" is not "Ready", error: <nil>
	W1018 18:23:56.976981  207600 pod_ready.go:104] pod "coredns-66bc5c9577-grf2z" is not "Ready", error: <nil>
	W1018 18:23:58.978463  207600 pod_ready.go:104] pod "coredns-66bc5c9577-grf2z" is not "Ready", error: <nil>
	W1018 18:24:01.477068  207600 pod_ready.go:104] pod "coredns-66bc5c9577-grf2z" is not "Ready", error: <nil>
	W1018 18:24:03.477280  207600 pod_ready.go:104] pod "coredns-66bc5c9577-grf2z" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Oct 18 18:23:47 default-k8s-diff-port-192562 crio[649]: time="2025-10-18T18:23:47.853143247Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 18:23:47 default-k8s-diff-port-192562 crio[649]: time="2025-10-18T18:23:47.859557426Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 18:23:47 default-k8s-diff-port-192562 crio[649]: time="2025-10-18T18:23:47.859593603Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 18:23:47 default-k8s-diff-port-192562 crio[649]: time="2025-10-18T18:23:47.85961817Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 18:23:47 default-k8s-diff-port-192562 crio[649]: time="2025-10-18T18:23:47.862765574Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 18:23:47 default-k8s-diff-port-192562 crio[649]: time="2025-10-18T18:23:47.862801792Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 18:23:47 default-k8s-diff-port-192562 crio[649]: time="2025-10-18T18:23:47.86284953Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 18:23:47 default-k8s-diff-port-192562 crio[649]: time="2025-10-18T18:23:47.866198447Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 18:23:47 default-k8s-diff-port-192562 crio[649]: time="2025-10-18T18:23:47.866228617Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 18:23:47 default-k8s-diff-port-192562 crio[649]: time="2025-10-18T18:23:47.866255539Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 18:23:47 default-k8s-diff-port-192562 crio[649]: time="2025-10-18T18:23:47.869351685Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 18:23:47 default-k8s-diff-port-192562 crio[649]: time="2025-10-18T18:23:47.869387928Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 18:23:56 default-k8s-diff-port-192562 crio[649]: time="2025-10-18T18:23:56.997594714Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=cccc34d6-9719-4971-b9c1-b6d57d70c151 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 18:23:56 default-k8s-diff-port-192562 crio[649]: time="2025-10-18T18:23:56.999225439Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=fb28a74e-820f-4d66-acec-4ead633e1321 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 18:23:57 default-k8s-diff-port-192562 crio[649]: time="2025-10-18T18:23:57.001244033Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fz7jc/dashboard-metrics-scraper" id=29e78b38-c785-4b00-8c98-c6e4a50fe7e5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 18:23:57 default-k8s-diff-port-192562 crio[649]: time="2025-10-18T18:23:57.00155136Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 18:23:57 default-k8s-diff-port-192562 crio[649]: time="2025-10-18T18:23:57.011199566Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 18:23:57 default-k8s-diff-port-192562 crio[649]: time="2025-10-18T18:23:57.011807491Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 18:23:57 default-k8s-diff-port-192562 crio[649]: time="2025-10-18T18:23:57.034645479Z" level=info msg="Created container d724bfd66793bad839089c2ac7e4752e48c341e43c36b8084731177c7fea4183: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fz7jc/dashboard-metrics-scraper" id=29e78b38-c785-4b00-8c98-c6e4a50fe7e5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 18:23:57 default-k8s-diff-port-192562 crio[649]: time="2025-10-18T18:23:57.035992105Z" level=info msg="Starting container: d724bfd66793bad839089c2ac7e4752e48c341e43c36b8084731177c7fea4183" id=84257cc1-8269-4e32-ac4f-772ca2ae07ae name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 18:23:57 default-k8s-diff-port-192562 crio[649]: time="2025-10-18T18:23:57.038889431Z" level=info msg="Started container" PID=1716 containerID=d724bfd66793bad839089c2ac7e4752e48c341e43c36b8084731177c7fea4183 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fz7jc/dashboard-metrics-scraper id=84257cc1-8269-4e32-ac4f-772ca2ae07ae name=/runtime.v1.RuntimeService/StartContainer sandboxID=195e46e24830a56f539cdc92ba914d4cf5b224fbd9f59f2e07a0c3cac7c7318a
	Oct 18 18:23:57 default-k8s-diff-port-192562 conmon[1712]: conmon d724bfd66793bad83908 <ninfo>: container 1716 exited with status 1
	Oct 18 18:23:57 default-k8s-diff-port-192562 crio[649]: time="2025-10-18T18:23:57.300174607Z" level=info msg="Removing container: bb0db0af8dc67dd37db129ff5dd70a0a4e0681fa835778fb98bdcf2cc0ac52ee" id=6b3c1ab8-8395-4002-b175-f84c56841466 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 18:23:57 default-k8s-diff-port-192562 crio[649]: time="2025-10-18T18:23:57.314210085Z" level=info msg="Error loading conmon cgroup of container bb0db0af8dc67dd37db129ff5dd70a0a4e0681fa835778fb98bdcf2cc0ac52ee: cgroup deleted" id=6b3c1ab8-8395-4002-b175-f84c56841466 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 18:23:57 default-k8s-diff-port-192562 crio[649]: time="2025-10-18T18:23:57.318504401Z" level=info msg="Removed container bb0db0af8dc67dd37db129ff5dd70a0a4e0681fa835778fb98bdcf2cc0ac52ee: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fz7jc/dashboard-metrics-scraper" id=6b3c1ab8-8395-4002-b175-f84c56841466 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	d724bfd66793b       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           9 seconds ago        Exited              dashboard-metrics-scraper   3                   195e46e24830a       dashboard-metrics-scraper-6ffb444bf9-fz7jc             kubernetes-dashboard
	77450e891d0f8       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           26 seconds ago       Running             storage-provisioner         2                   4b35db890eb2e       storage-provisioner                                    kube-system
	968df3fa8857b       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   47 seconds ago       Running             kubernetes-dashboard        0                   b9b8de62d3a4d       kubernetes-dashboard-855c9754f9-mq728                  kubernetes-dashboard
	f8bda09d0b7f2       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           58 seconds ago       Running             busybox                     1                   a355167ded6a5       busybox                                                default
	6924b85ba570a       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           58 seconds ago       Running             coredns                     1                   bee241e681d96       coredns-66bc5c9577-psj29                               kube-system
	62679bc7de3d9       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           58 seconds ago       Running             kube-proxy                  1                   339402e29fff3       kube-proxy-c7jft                                       kube-system
	79df2c9185f91       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           58 seconds ago       Exited              storage-provisioner         1                   4b35db890eb2e       storage-provisioner                                    kube-system
	54c91171549a4       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           58 seconds ago       Running             kindnet-cni                 1                   e9846bd4be128       kindnet-6vrvc                                          kube-system
	b95b10640928c       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   45c63acb4cd17       kube-scheduler-default-k8s-diff-port-192562            kube-system
	01370573ce275       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   004965c6bdac6       etcd-default-k8s-diff-port-192562                      kube-system
	a40f2fadeda18       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   9ab229171bf67       kube-controller-manager-default-k8s-diff-port-192562   kube-system
	23e7b4f21a923       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   6b6abce6bd8d6       kube-apiserver-default-k8s-diff-port-192562            kube-system
	
	
	==> coredns [6924b85ba570a30f851ea60cbaf6498eaf85975f8883a16972cc0b614db3ae1a] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:41573 - 11168 "HINFO IN 7582150822066899225.6180216387346826941. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015492901s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-192562
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-192562
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=default-k8s-diff-port-192562
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T18_21_40_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 18:21:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-192562
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 18:23:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 18:23:37 +0000   Sat, 18 Oct 2025 18:21:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 18:23:37 +0000   Sat, 18 Oct 2025 18:21:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 18:23:37 +0000   Sat, 18 Oct 2025 18:21:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 18:23:37 +0000   Sat, 18 Oct 2025 18:22:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-192562
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                c4581513-26ed-464e-afab-6c98e6b6fd18
	  Boot ID:                    8a1d6305-2994-4aa4-a4cc-6b62966b9918
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 coredns-66bc5c9577-psj29                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m22s
	  kube-system                 etcd-default-k8s-diff-port-192562                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m27s
	  kube-system                 kindnet-6vrvc                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m22s
	  kube-system                 kube-apiserver-default-k8s-diff-port-192562             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m29s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-192562    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 kube-proxy-c7jft                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 kube-scheduler-default-k8s-diff-port-192562             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-fz7jc              0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-mq728                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m20s                  kube-proxy       
	  Normal   Starting                 57s                    kube-proxy       
	  Normal   Starting                 2m38s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m38s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m37s (x8 over 2m37s)  kubelet          Node default-k8s-diff-port-192562 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m37s (x8 over 2m37s)  kubelet          Node default-k8s-diff-port-192562 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m37s (x8 over 2m37s)  kubelet          Node default-k8s-diff-port-192562 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m27s                  kubelet          Node default-k8s-diff-port-192562 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m27s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m27s                  kubelet          Node default-k8s-diff-port-192562 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m27s                  kubelet          Node default-k8s-diff-port-192562 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m27s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m23s                  node-controller  Node default-k8s-diff-port-192562 event: Registered Node default-k8s-diff-port-192562 in Controller
	  Normal   NodeReady                101s                   kubelet          Node default-k8s-diff-port-192562 status is now: NodeReady
	  Normal   Starting                 66s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 66s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  65s (x8 over 66s)      kubelet          Node default-k8s-diff-port-192562 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    65s (x8 over 66s)      kubelet          Node default-k8s-diff-port-192562 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     65s (x8 over 66s)      kubelet          Node default-k8s-diff-port-192562 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           56s                    node-controller  Node default-k8s-diff-port-192562 event: Registered Node default-k8s-diff-port-192562 in Controller
	
	
	==> dmesg <==
	[Oct18 18:01] overlayfs: idmapped layers are currently not supported
	[Oct18 18:02] overlayfs: idmapped layers are currently not supported
	[Oct18 18:04] overlayfs: idmapped layers are currently not supported
	[ +24.403909] overlayfs: idmapped layers are currently not supported
	[  +6.162774] overlayfs: idmapped layers are currently not supported
	[Oct18 18:05] overlayfs: idmapped layers are currently not supported
	[ +25.128760] overlayfs: idmapped layers are currently not supported
	[Oct18 18:06] overlayfs: idmapped layers are currently not supported
	[Oct18 18:07] overlayfs: idmapped layers are currently not supported
	[Oct18 18:08] overlayfs: idmapped layers are currently not supported
	[Oct18 18:09] overlayfs: idmapped layers are currently not supported
	[Oct18 18:11] overlayfs: idmapped layers are currently not supported
	[Oct18 18:13] overlayfs: idmapped layers are currently not supported
	[ +30.969240] overlayfs: idmapped layers are currently not supported
	[Oct18 18:15] overlayfs: idmapped layers are currently not supported
	[Oct18 18:16] overlayfs: idmapped layers are currently not supported
	[Oct18 18:17] overlayfs: idmapped layers are currently not supported
	[ +23.167826] overlayfs: idmapped layers are currently not supported
	[Oct18 18:18] overlayfs: idmapped layers are currently not supported
	[ +38.509809] overlayfs: idmapped layers are currently not supported
	[Oct18 18:19] overlayfs: idmapped layers are currently not supported
	[Oct18 18:21] overlayfs: idmapped layers are currently not supported
	[Oct18 18:22] overlayfs: idmapped layers are currently not supported
	[Oct18 18:23] overlayfs: idmapped layers are currently not supported
	[ +30.822562] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [01370573ce2751c5a9bc9cf2c5b653ed64758dd3e91f3aec786b7f16d88bf722] <==
	{"level":"warn","ts":"2025-10-18T18:23:04.912256Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:04.931896Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:04.959441Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:05.001861Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:05.022306Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:05.055538Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:05.085885Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:05.102040Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:05.113971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:05.133393Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:05.154981Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:05.177373Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:05.201033Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:05.210588Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:05.232518Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:05.269667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:05.273468Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:05.289470Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:05.304563Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:05.326044Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:05.338277Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:05.382335Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:05.408338Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:05.420677Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:05.531225Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60566","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 18:24:06 up  2:06,  0 user,  load average: 2.43, 2.91, 2.73
	Linux default-k8s-diff-port-192562 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [54c91171549a4e5775393d4768d527b1d8e22fd30e268495e4d3b6100ec319a5] <==
	I1018 18:23:07.652927       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 18:23:07.653191       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1018 18:23:07.653309       1 main.go:148] setting mtu 1500 for CNI 
	I1018 18:23:07.653320       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 18:23:07.653329       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T18:23:07Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 18:23:07.852722       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 18:23:07.852817       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 18:23:07.852851       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 18:23:07.853691       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1018 18:23:37.853617       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1018 18:23:37.853737       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1018 18:23:37.853831       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1018 18:23:37.853953       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1018 18:23:39.353304       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 18:23:39.353348       1 metrics.go:72] Registering metrics
	I1018 18:23:39.353408       1 controller.go:711] "Syncing nftables rules"
	I1018 18:23:47.852791       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 18:23:47.852848       1 main.go:301] handling current node
	I1018 18:23:57.852886       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 18:23:57.852976       1 main.go:301] handling current node
	
	
	==> kube-apiserver [23e7b4f21a923153503f5d9f363c452579100dd2a260750e3b7a35d6ca8dcb22] <==
	I1018 18:23:06.602188       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1018 18:23:06.602673       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1018 18:23:06.645984       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1018 18:23:06.646038       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1018 18:23:06.646121       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1018 18:23:06.646150       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1018 18:23:06.646165       1 policy_source.go:240] refreshing policies
	I1018 18:23:06.646276       1 aggregator.go:171] initial CRD sync complete...
	I1018 18:23:06.646284       1 autoregister_controller.go:144] Starting autoregister controller
	I1018 18:23:06.646290       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 18:23:06.646295       1 cache.go:39] Caches are synced for autoregister controller
	I1018 18:23:06.717847       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 18:23:06.719164       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1018 18:23:06.802260       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1018 18:23:07.029552       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 18:23:07.146967       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 18:23:07.619728       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 18:23:07.768757       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 18:23:07.838475       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 18:23:07.861987       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 18:23:07.999396       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.29.30"}
	I1018 18:23:08.098412       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.7.198"}
	I1018 18:23:10.949767       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 18:23:11.007868       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 18:23:11.239758       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [a40f2fadeda1857088554cfe73930b819e69cca05e8a65552a5d8d7bb7b5946d] <==
	I1018 18:23:10.739397       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1018 18:23:10.739501       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1018 18:23:10.739546       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1018 18:23:10.739567       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1018 18:23:10.739576       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 18:23:10.740246       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1018 18:23:10.740316       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1018 18:23:10.741382       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1018 18:23:10.745281       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1018 18:23:10.745815       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 18:23:10.748201       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1018 18:23:10.748424       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 18:23:10.751537       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 18:23:10.755626       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 18:23:10.755763       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 18:23:10.757952       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 18:23:10.758082       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 18:23:10.759009       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1018 18:23:10.760109       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 18:23:10.774193       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1018 18:23:10.792642       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 18:23:10.792748       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 18:23:10.792780       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 18:23:10.799461       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1018 18:23:11.277811       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kubernetes-dashboard/dashboard-metrics-scraper" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [62679bc7de3d96ea22f1ec1fe03d9713354a7daeddddb6d200b8b0232b3a9220] <==
	I1018 18:23:08.125135       1 server_linux.go:53] "Using iptables proxy"
	I1018 18:23:08.317705       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 18:23:08.477259       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 18:23:08.477293       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1018 18:23:08.477363       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 18:23:08.496467       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 18:23:08.496522       1 server_linux.go:132] "Using iptables Proxier"
	I1018 18:23:08.501263       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 18:23:08.501585       1 server.go:527] "Version info" version="v1.34.1"
	I1018 18:23:08.501611       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 18:23:08.505903       1 config.go:106] "Starting endpoint slice config controller"
	I1018 18:23:08.505985       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 18:23:08.506192       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 18:23:08.506212       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 18:23:08.506354       1 config.go:200] "Starting service config controller"
	I1018 18:23:08.506399       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 18:23:08.506455       1 config.go:309] "Starting node config controller"
	I1018 18:23:08.506483       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 18:23:08.606403       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 18:23:08.606411       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 18:23:08.606441       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 18:23:08.606566       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [b95b10640928c3fde63ab1dc3d1f20d7b8532a3c6bb09b5b79dd506a1cada9c2] <==
	I1018 18:23:04.597423       1 serving.go:386] Generated self-signed cert in-memory
	I1018 18:23:08.323621       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 18:23:08.324363       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 18:23:08.340055       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 18:23:08.340213       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1018 18:23:08.340276       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1018 18:23:08.340348       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 18:23:08.345141       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 18:23:08.345255       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 18:23:08.347051       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 18:23:08.347086       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 18:23:08.441002       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1018 18:23:08.447366       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 18:23:08.447387       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 18:23:13 default-k8s-diff-port-192562 kubelet[778]: W1018 18:23:13.426252     778 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/c0a8933c552c9d4e5fb4ca01ca33c573463079ebfb6960b8ac96dc752d5faeaa/crio-195e46e24830a56f539cdc92ba914d4cf5b224fbd9f59f2e07a0c3cac7c7318a WatchSource:0}: Error finding container 195e46e24830a56f539cdc92ba914d4cf5b224fbd9f59f2e07a0c3cac7c7318a: Status 404 returned error can't find the container with id 195e46e24830a56f539cdc92ba914d4cf5b224fbd9f59f2e07a0c3cac7c7318a
	Oct 18 18:23:16 default-k8s-diff-port-192562 kubelet[778]: I1018 18:23:16.641794     778 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 18 18:23:19 default-k8s-diff-port-192562 kubelet[778]: I1018 18:23:19.425850     778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-mq728" podStartSLOduration=3.685476181 podStartE2EDuration="8.425830386s" podCreationTimestamp="2025-10-18 18:23:11 +0000 UTC" firstStartedPulling="2025-10-18 18:23:13.402222776 +0000 UTC m=+12.628962176" lastFinishedPulling="2025-10-18 18:23:18.142576989 +0000 UTC m=+17.369316381" observedRunningTime="2025-10-18 18:23:19.196790706 +0000 UTC m=+18.423530098" watchObservedRunningTime="2025-10-18 18:23:19.425830386 +0000 UTC m=+18.652569786"
	Oct 18 18:23:22 default-k8s-diff-port-192562 kubelet[778]: I1018 18:23:22.193863     778 scope.go:117] "RemoveContainer" containerID="eb0020991d91eb03b8211c773249d54bee83a9ac859a085c3c434ef1176f4c67"
	Oct 18 18:23:23 default-k8s-diff-port-192562 kubelet[778]: I1018 18:23:23.198645     778 scope.go:117] "RemoveContainer" containerID="eb0020991d91eb03b8211c773249d54bee83a9ac859a085c3c434ef1176f4c67"
	Oct 18 18:23:23 default-k8s-diff-port-192562 kubelet[778]: I1018 18:23:23.199254     778 scope.go:117] "RemoveContainer" containerID="b24d0dc1552365fa992da757e3533244a7782f9a010075b9e45e48fca1f40699"
	Oct 18 18:23:23 default-k8s-diff-port-192562 kubelet[778]: E1018 18:23:23.199503     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fz7jc_kubernetes-dashboard(b59a85df-7bc0-4f24-b7c2-4214bd847824)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fz7jc" podUID="b59a85df-7bc0-4f24-b7c2-4214bd847824"
	Oct 18 18:23:24 default-k8s-diff-port-192562 kubelet[778]: I1018 18:23:24.203088     778 scope.go:117] "RemoveContainer" containerID="b24d0dc1552365fa992da757e3533244a7782f9a010075b9e45e48fca1f40699"
	Oct 18 18:23:24 default-k8s-diff-port-192562 kubelet[778]: E1018 18:23:24.203258     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fz7jc_kubernetes-dashboard(b59a85df-7bc0-4f24-b7c2-4214bd847824)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fz7jc" podUID="b59a85df-7bc0-4f24-b7c2-4214bd847824"
	Oct 18 18:23:25 default-k8s-diff-port-192562 kubelet[778]: I1018 18:23:25.206389     778 scope.go:117] "RemoveContainer" containerID="b24d0dc1552365fa992da757e3533244a7782f9a010075b9e45e48fca1f40699"
	Oct 18 18:23:25 default-k8s-diff-port-192562 kubelet[778]: E1018 18:23:25.206573     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fz7jc_kubernetes-dashboard(b59a85df-7bc0-4f24-b7c2-4214bd847824)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fz7jc" podUID="b59a85df-7bc0-4f24-b7c2-4214bd847824"
	Oct 18 18:23:35 default-k8s-diff-port-192562 kubelet[778]: I1018 18:23:35.996179     778 scope.go:117] "RemoveContainer" containerID="b24d0dc1552365fa992da757e3533244a7782f9a010075b9e45e48fca1f40699"
	Oct 18 18:23:36 default-k8s-diff-port-192562 kubelet[778]: I1018 18:23:36.234464     778 scope.go:117] "RemoveContainer" containerID="b24d0dc1552365fa992da757e3533244a7782f9a010075b9e45e48fca1f40699"
	Oct 18 18:23:36 default-k8s-diff-port-192562 kubelet[778]: I1018 18:23:36.234762     778 scope.go:117] "RemoveContainer" containerID="bb0db0af8dc67dd37db129ff5dd70a0a4e0681fa835778fb98bdcf2cc0ac52ee"
	Oct 18 18:23:36 default-k8s-diff-port-192562 kubelet[778]: E1018 18:23:36.234930     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fz7jc_kubernetes-dashboard(b59a85df-7bc0-4f24-b7c2-4214bd847824)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fz7jc" podUID="b59a85df-7bc0-4f24-b7c2-4214bd847824"
	Oct 18 18:23:39 default-k8s-diff-port-192562 kubelet[778]: I1018 18:23:39.244317     778 scope.go:117] "RemoveContainer" containerID="79df2c9185f91fe68153976c279dd9aaa7775b92571473ea614f468db51721de"
	Oct 18 18:23:43 default-k8s-diff-port-192562 kubelet[778]: I1018 18:23:43.388037     778 scope.go:117] "RemoveContainer" containerID="bb0db0af8dc67dd37db129ff5dd70a0a4e0681fa835778fb98bdcf2cc0ac52ee"
	Oct 18 18:23:43 default-k8s-diff-port-192562 kubelet[778]: E1018 18:23:43.390114     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fz7jc_kubernetes-dashboard(b59a85df-7bc0-4f24-b7c2-4214bd847824)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fz7jc" podUID="b59a85df-7bc0-4f24-b7c2-4214bd847824"
	Oct 18 18:23:56 default-k8s-diff-port-192562 kubelet[778]: I1018 18:23:56.996895     778 scope.go:117] "RemoveContainer" containerID="bb0db0af8dc67dd37db129ff5dd70a0a4e0681fa835778fb98bdcf2cc0ac52ee"
	Oct 18 18:23:57 default-k8s-diff-port-192562 kubelet[778]: I1018 18:23:57.294981     778 scope.go:117] "RemoveContainer" containerID="bb0db0af8dc67dd37db129ff5dd70a0a4e0681fa835778fb98bdcf2cc0ac52ee"
	Oct 18 18:23:57 default-k8s-diff-port-192562 kubelet[778]: I1018 18:23:57.295276     778 scope.go:117] "RemoveContainer" containerID="d724bfd66793bad839089c2ac7e4752e48c341e43c36b8084731177c7fea4183"
	Oct 18 18:23:57 default-k8s-diff-port-192562 kubelet[778]: E1018 18:23:57.295431     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fz7jc_kubernetes-dashboard(b59a85df-7bc0-4f24-b7c2-4214bd847824)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fz7jc" podUID="b59a85df-7bc0-4f24-b7c2-4214bd847824"
	Oct 18 18:24:01 default-k8s-diff-port-192562 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 18:24:01 default-k8s-diff-port-192562 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 18:24:01 default-k8s-diff-port-192562 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [968df3fa8857ba03f182ffe49abd49d62aa437f6426d77631a03400f7324c070] <==
	2025/10/18 18:23:18 Using namespace: kubernetes-dashboard
	2025/10/18 18:23:18 Using in-cluster config to connect to apiserver
	2025/10/18 18:23:18 Using secret token for csrf signing
	2025/10/18 18:23:18 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/18 18:23:18 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/18 18:23:18 Successful initial request to the apiserver, version: v1.34.1
	2025/10/18 18:23:18 Generating JWE encryption key
	2025/10/18 18:23:18 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/18 18:23:18 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/18 18:23:18 Initializing JWE encryption key from synchronized object
	2025/10/18 18:23:18 Creating in-cluster Sidecar client
	2025/10/18 18:23:18 Serving insecurely on HTTP port: 9090
	2025/10/18 18:23:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 18:23:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 18:23:18 Starting overwatch
	
	
	==> storage-provisioner [77450e891d0f89a28fe61fe538b628c44c0a3acdc00441daeaf6962e3dc60913] <==
	I1018 18:23:39.389896       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 18:23:39.415169       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 18:23:39.415291       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1018 18:23:39.421920       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:23:42.878210       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:23:47.139639       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:23:50.738518       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:23:53.792544       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:23:56.814969       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:23:56.820005       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 18:23:56.820231       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 18:23:56.820837       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-192562_bfbe0c44-e6b6-443e-9a0c-55914d6c49b1!
	I1018 18:23:56.820699       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ca1be163-b867-4e80-ab6a-bbe296c21eb5", APIVersion:"v1", ResourceVersion:"689", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-192562_bfbe0c44-e6b6-443e-9a0c-55914d6c49b1 became leader
	W1018 18:23:56.825586       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:23:56.828777       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 18:23:56.922105       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-192562_bfbe0c44-e6b6-443e-9a0c-55914d6c49b1!
	W1018 18:23:58.831689       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:23:58.836658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:24:00.840499       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:24:00.847499       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:24:02.851511       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:24:02.857403       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:24:04.861412       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:24:04.866451       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [79df2c9185f91fe68153976c279dd9aaa7775b92571473ea614f468db51721de] <==
	I1018 18:23:08.204784       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1018 18:23:38.248832       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-192562 -n default-k8s-diff-port-192562
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-192562 -n default-k8s-diff-port-192562: exit status 2 (388.698751ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-192562 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (6.65s)
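The post-mortem above shows the first storage-provisioner replica exiting with an API-server i/o timeout (dial tcp 10.96.0.1:443), while its replacement only logs deprecation warnings for the v1 Endpoints object it uses for leader election (kube-system/k8s.io-minikube-hostpath). A minimal diagnostic sketch, assuming the default-k8s-diff-port-192562 profile is still running and curl is available in the node image; these are standard minikube/kubectl invocations, not commands taken from this report:

	# check that the in-cluster apiserver service IP is reachable from the node
	minikube ssh -p default-k8s-diff-port-192562 -- "curl -sk https://10.96.0.1:443/version"
	# confirm which storage-provisioner instance currently holds the leader lease
	kubectl --context default-k8s-diff-port-192562 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml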

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (7.83s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-213943 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p embed-certs-213943 --alsologtostderr -v=1: exit status 80 (2.183624521s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-213943 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 18:24:34.541352  213684 out.go:360] Setting OutFile to fd 1 ...
	I1018 18:24:34.541667  213684 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 18:24:34.541709  213684 out.go:374] Setting ErrFile to fd 2...
	I1018 18:24:34.541729  213684 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 18:24:34.542149  213684 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-2509/.minikube/bin
	I1018 18:24:34.542544  213684 out.go:368] Setting JSON to false
	I1018 18:24:34.542614  213684 mustload.go:65] Loading cluster: embed-certs-213943
	I1018 18:24:34.543176  213684 config.go:182] Loaded profile config "embed-certs-213943": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 18:24:34.543811  213684 cli_runner.go:164] Run: docker container inspect embed-certs-213943 --format={{.State.Status}}
	I1018 18:24:34.575139  213684 host.go:66] Checking if "embed-certs-213943" exists ...
	I1018 18:24:34.575540  213684 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 18:24:34.669779  213684 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:true NGoroutines:78 SystemTime:2025-10-18 18:24:34.657918178 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 18:24:34.670422  213684 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-213943 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true
) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1018 18:24:34.676504  213684 out.go:179] * Pausing node embed-certs-213943 ... 
	I1018 18:24:34.680684  213684 host.go:66] Checking if "embed-certs-213943" exists ...
	I1018 18:24:34.681087  213684 ssh_runner.go:195] Run: systemctl --version
	I1018 18:24:34.681135  213684 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-213943
	I1018 18:24:34.701724  213684 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/embed-certs-213943/id_rsa Username:docker}
	I1018 18:24:34.817116  213684 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 18:24:34.839526  213684 pause.go:52] kubelet running: true
	I1018 18:24:34.839604  213684 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 18:24:35.150436  213684 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 18:24:35.150523  213684 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 18:24:35.251584  213684 cri.go:89] found id: "6ea0fc669c5a5aed268bc4f1b1959ec658c78291c197f49575e209481a5d2d96"
	I1018 18:24:35.251600  213684 cri.go:89] found id: "c7e17397787390cfe2e365edc60882b35fef038d500e72ed7964bce1242d4793"
	I1018 18:24:35.251604  213684 cri.go:89] found id: "630e3f457293e1639be23c9cecc27705318c350d2ca0ae9fa75f375bfdf573c8"
	I1018 18:24:35.251608  213684 cri.go:89] found id: "16aec3adc07fffcd5545d9bd12ca76fc45c9f92f49291dbfa7eb00de6d54c0ac"
	I1018 18:24:35.251611  213684 cri.go:89] found id: "39d6593c6d8d54c71c1c11426effcafa05b750b8b4e8c8f61eccd2fde32ca8ec"
	I1018 18:24:35.251614  213684 cri.go:89] found id: "97b7723e6cc93259a63a7dc305c6dd7a4974876e6dc283507e6d8ce5af737bcb"
	I1018 18:24:35.251618  213684 cri.go:89] found id: "9ae5471fee776db561d720631098bdc12432bd23b92d88eb2d07deb57fed51ac"
	I1018 18:24:35.251621  213684 cri.go:89] found id: "579b2e90159d3f472f72b4d74cead642311dbb50b6aa56372bed6e44fa5f0026"
	I1018 18:24:35.251624  213684 cri.go:89] found id: "320b2b6a0f723790bef132bc7d46d0c55becfa751e8cd836c15cde5c23b0446d"
	I1018 18:24:35.251630  213684 cri.go:89] found id: "b1cbdd377acf4ce0ba012efbe8a92d490f1cf26de33b65c0792311ca69b2f97d"
	I1018 18:24:35.251633  213684 cri.go:89] found id: "e9de5e570569bf04ee4708a292c5a4963413811ea3989c2d9d52ea34af3ed27e"
	I1018 18:24:35.251636  213684 cri.go:89] found id: ""
	I1018 18:24:35.251673  213684 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 18:24:35.270273  213684 retry.go:31] will retry after 334.160479ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T18:24:35Z" level=error msg="open /run/runc: no such file or directory"
	I1018 18:24:35.604662  213684 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 18:24:35.622369  213684 pause.go:52] kubelet running: false
	I1018 18:24:35.622432  213684 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 18:24:35.855471  213684 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 18:24:35.855651  213684 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 18:24:35.982304  213684 cri.go:89] found id: "6ea0fc669c5a5aed268bc4f1b1959ec658c78291c197f49575e209481a5d2d96"
	I1018 18:24:35.982374  213684 cri.go:89] found id: "c7e17397787390cfe2e365edc60882b35fef038d500e72ed7964bce1242d4793"
	I1018 18:24:35.982392  213684 cri.go:89] found id: "630e3f457293e1639be23c9cecc27705318c350d2ca0ae9fa75f375bfdf573c8"
	I1018 18:24:35.982409  213684 cri.go:89] found id: "16aec3adc07fffcd5545d9bd12ca76fc45c9f92f49291dbfa7eb00de6d54c0ac"
	I1018 18:24:35.982426  213684 cri.go:89] found id: "39d6593c6d8d54c71c1c11426effcafa05b750b8b4e8c8f61eccd2fde32ca8ec"
	I1018 18:24:35.982458  213684 cri.go:89] found id: "97b7723e6cc93259a63a7dc305c6dd7a4974876e6dc283507e6d8ce5af737bcb"
	I1018 18:24:35.982473  213684 cri.go:89] found id: "9ae5471fee776db561d720631098bdc12432bd23b92d88eb2d07deb57fed51ac"
	I1018 18:24:35.982491  213684 cri.go:89] found id: "579b2e90159d3f472f72b4d74cead642311dbb50b6aa56372bed6e44fa5f0026"
	I1018 18:24:35.982523  213684 cri.go:89] found id: "320b2b6a0f723790bef132bc7d46d0c55becfa751e8cd836c15cde5c23b0446d"
	I1018 18:24:35.982546  213684 cri.go:89] found id: "b1cbdd377acf4ce0ba012efbe8a92d490f1cf26de33b65c0792311ca69b2f97d"
	I1018 18:24:35.982575  213684 cri.go:89] found id: "e9de5e570569bf04ee4708a292c5a4963413811ea3989c2d9d52ea34af3ed27e"
	I1018 18:24:35.982605  213684 cri.go:89] found id: ""
	I1018 18:24:35.982693  213684 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 18:24:35.999507  213684 retry.go:31] will retry after 279.042789ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T18:24:35Z" level=error msg="open /run/runc: no such file or directory"
	I1018 18:24:36.278809  213684 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 18:24:36.293112  213684 pause.go:52] kubelet running: false
	I1018 18:24:36.293229  213684 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 18:24:36.498283  213684 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 18:24:36.498378  213684 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 18:24:36.585353  213684 cri.go:89] found id: "6ea0fc669c5a5aed268bc4f1b1959ec658c78291c197f49575e209481a5d2d96"
	I1018 18:24:36.585378  213684 cri.go:89] found id: "c7e17397787390cfe2e365edc60882b35fef038d500e72ed7964bce1242d4793"
	I1018 18:24:36.585384  213684 cri.go:89] found id: "630e3f457293e1639be23c9cecc27705318c350d2ca0ae9fa75f375bfdf573c8"
	I1018 18:24:36.585388  213684 cri.go:89] found id: "16aec3adc07fffcd5545d9bd12ca76fc45c9f92f49291dbfa7eb00de6d54c0ac"
	I1018 18:24:36.585392  213684 cri.go:89] found id: "39d6593c6d8d54c71c1c11426effcafa05b750b8b4e8c8f61eccd2fde32ca8ec"
	I1018 18:24:36.585395  213684 cri.go:89] found id: "97b7723e6cc93259a63a7dc305c6dd7a4974876e6dc283507e6d8ce5af737bcb"
	I1018 18:24:36.585419  213684 cri.go:89] found id: "9ae5471fee776db561d720631098bdc12432bd23b92d88eb2d07deb57fed51ac"
	I1018 18:24:36.585436  213684 cri.go:89] found id: "579b2e90159d3f472f72b4d74cead642311dbb50b6aa56372bed6e44fa5f0026"
	I1018 18:24:36.585446  213684 cri.go:89] found id: "320b2b6a0f723790bef132bc7d46d0c55becfa751e8cd836c15cde5c23b0446d"
	I1018 18:24:36.585456  213684 cri.go:89] found id: "b1cbdd377acf4ce0ba012efbe8a92d490f1cf26de33b65c0792311ca69b2f97d"
	I1018 18:24:36.585464  213684 cri.go:89] found id: "e9de5e570569bf04ee4708a292c5a4963413811ea3989c2d9d52ea34af3ed27e"
	I1018 18:24:36.585467  213684 cri.go:89] found id: ""
	I1018 18:24:36.585530  213684 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 18:24:36.601237  213684 out.go:203] 
	W1018 18:24:36.604522  213684 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T18:24:36Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T18:24:36Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 18:24:36.604548  213684 out.go:285] * 
	* 
	W1018 18:24:36.610210  213684 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 18:24:36.613908  213684 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p embed-certs-213943 --alsologtostderr -v=1 failed: exit status 80
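The stderr trace above shows the pause flow disabling the kubelet, listing kube-system/kubernetes-dashboard/istio-operator containers via crictl, and then running `sudo runc list -f json`, which fails because /run/runc does not exist on the node; minikube retries twice and then aborts with GUEST_PAUSE. A minimal sketch of reproducing the failing step by hand, assuming the embed-certs-213943 node is still up (the runc and crictl invocations mirror the ones logged; which state directories exist under /run is environment-dependent and listed here only as a check, not a claim):

	# the exact command minikube ran, expected to fail with "open /run/runc: no such file or directory"
	minikube ssh -p embed-certs-213943 -- "sudo runc list -f json"
	# the same containers are still visible through the CRI, so the runtime itself is up
	minikube ssh -p embed-certs-213943 -- "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	# see which runtime state directories actually exist on this image
	minikube ssh -p embed-certs-213943 -- "ls -ld /run/runc /run/crio 2>&1"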
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-213943
helpers_test.go:243: (dbg) docker inspect embed-certs-213943:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f6d884df9095b5a97c2ba5df164207ee5c937524354408254d52ae7a929463c6",
	        "Created": "2025-10-18T18:21:41.10994787Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 207729,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T18:23:24.901773449Z",
	            "FinishedAt": "2025-10-18T18:23:23.346386885Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/f6d884df9095b5a97c2ba5df164207ee5c937524354408254d52ae7a929463c6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f6d884df9095b5a97c2ba5df164207ee5c937524354408254d52ae7a929463c6/hostname",
	        "HostsPath": "/var/lib/docker/containers/f6d884df9095b5a97c2ba5df164207ee5c937524354408254d52ae7a929463c6/hosts",
	        "LogPath": "/var/lib/docker/containers/f6d884df9095b5a97c2ba5df164207ee5c937524354408254d52ae7a929463c6/f6d884df9095b5a97c2ba5df164207ee5c937524354408254d52ae7a929463c6-json.log",
	        "Name": "/embed-certs-213943",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-213943:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-213943",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f6d884df9095b5a97c2ba5df164207ee5c937524354408254d52ae7a929463c6",
	                "LowerDir": "/var/lib/docker/overlay2/5ae3bc0eef02b15432a8f6a5068c9db91f9b4ede8c0e696a3d1cf388220bd2a0-init/diff:/var/lib/docker/overlay2/584ab177b02ad2db5330471b7171ad39934c457d8615b9ee4939a04b59f78474/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5ae3bc0eef02b15432a8f6a5068c9db91f9b4ede8c0e696a3d1cf388220bd2a0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5ae3bc0eef02b15432a8f6a5068c9db91f9b4ede8c0e696a3d1cf388220bd2a0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5ae3bc0eef02b15432a8f6a5068c9db91f9b4ede8c0e696a3d1cf388220bd2a0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-213943",
	                "Source": "/var/lib/docker/volumes/embed-certs-213943/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-213943",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-213943",
	                "name.minikube.sigs.k8s.io": "embed-certs-213943",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "292c25d22c4d9e11f46faa2ed367e503eb41b676995716fa90e11979d4b0c620",
	            "SandboxKey": "/var/run/docker/netns/292c25d22c4d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33068"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-213943": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "32:2b:3c:2a:0b:95",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "efe92dc8c8166df0c3008dadfb93e08ef35b4f9b392d6a8aee91eaee89568b86",
	                    "EndpointID": "8afb40a859bca5f3ddae67dcdb5e5c6065e66e48ead1cf82cb0cab54eeff0b2a",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-213943",
	                        "f6d884df9095"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-213943 -n embed-certs-213943
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-213943 -n embed-certs-213943: exit status 2 (452.190935ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-213943 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-213943 logs -n 25: (1.688845136s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p old-k8s-version-918475 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-918475       │ jenkins │ v1.37.0 │ 18 Oct 25 18:19 UTC │ 18 Oct 25 18:20 UTC │
	│ image   │ old-k8s-version-918475 image list --format=json                                                                                                                                                                                               │ old-k8s-version-918475       │ jenkins │ v1.37.0 │ 18 Oct 25 18:20 UTC │ 18 Oct 25 18:20 UTC │
	│ pause   │ -p old-k8s-version-918475 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-918475       │ jenkins │ v1.37.0 │ 18 Oct 25 18:20 UTC │                     │
	│ delete  │ -p old-k8s-version-918475                                                                                                                                                                                                                     │ old-k8s-version-918475       │ jenkins │ v1.37.0 │ 18 Oct 25 18:20 UTC │ 18 Oct 25 18:21 UTC │
	│ start   │ -p cert-expiration-463770 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-463770       │ jenkins │ v1.37.0 │ 18 Oct 25 18:21 UTC │ 18 Oct 25 18:21 UTC │
	│ delete  │ -p old-k8s-version-918475                                                                                                                                                                                                                     │ old-k8s-version-918475       │ jenkins │ v1.37.0 │ 18 Oct 25 18:21 UTC │ 18 Oct 25 18:21 UTC │
	│ start   │ -p default-k8s-diff-port-192562 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-192562 │ jenkins │ v1.37.0 │ 18 Oct 25 18:21 UTC │ 18 Oct 25 18:22 UTC │
	│ delete  │ -p cert-expiration-463770                                                                                                                                                                                                                     │ cert-expiration-463770       │ jenkins │ v1.37.0 │ 18 Oct 25 18:21 UTC │ 18 Oct 25 18:21 UTC │
	│ start   │ -p embed-certs-213943 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-213943           │ jenkins │ v1.37.0 │ 18 Oct 25 18:21 UTC │ 18 Oct 25 18:22 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-192562 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-192562 │ jenkins │ v1.37.0 │ 18 Oct 25 18:22 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-192562 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-192562 │ jenkins │ v1.37.0 │ 18 Oct 25 18:22 UTC │ 18 Oct 25 18:22 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-192562 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-192562 │ jenkins │ v1.37.0 │ 18 Oct 25 18:22 UTC │ 18 Oct 25 18:22 UTC │
	│ start   │ -p default-k8s-diff-port-192562 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-192562 │ jenkins │ v1.37.0 │ 18 Oct 25 18:22 UTC │ 18 Oct 25 18:23 UTC │
	│ addons  │ enable metrics-server -p embed-certs-213943 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-213943           │ jenkins │ v1.37.0 │ 18 Oct 25 18:23 UTC │                     │
	│ stop    │ -p embed-certs-213943 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-213943           │ jenkins │ v1.37.0 │ 18 Oct 25 18:23 UTC │ 18 Oct 25 18:23 UTC │
	│ addons  │ enable dashboard -p embed-certs-213943 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-213943           │ jenkins │ v1.37.0 │ 18 Oct 25 18:23 UTC │ 18 Oct 25 18:23 UTC │
	│ start   │ -p embed-certs-213943 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-213943           │ jenkins │ v1.37.0 │ 18 Oct 25 18:23 UTC │ 18 Oct 25 18:24 UTC │
	│ image   │ default-k8s-diff-port-192562 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-192562 │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │ 18 Oct 25 18:24 UTC │
	│ pause   │ -p default-k8s-diff-port-192562 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-192562 │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-192562                                                                                                                                                                                                               │ default-k8s-diff-port-192562 │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │ 18 Oct 25 18:24 UTC │
	│ delete  │ -p default-k8s-diff-port-192562                                                                                                                                                                                                               │ default-k8s-diff-port-192562 │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │ 18 Oct 25 18:24 UTC │
	│ delete  │ -p disable-driver-mounts-747178                                                                                                                                                                                                               │ disable-driver-mounts-747178 │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │ 18 Oct 25 18:24 UTC │
	│ start   │ -p no-preload-729957 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-729957            │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │                     │
	│ image   │ embed-certs-213943 image list --format=json                                                                                                                                                                                                   │ embed-certs-213943           │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │ 18 Oct 25 18:24 UTC │
	│ pause   │ -p embed-certs-213943 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-213943           │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 18:24:11
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 18:24:11.168077  211246 out.go:360] Setting OutFile to fd 1 ...
	I1018 18:24:11.168215  211246 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 18:24:11.168229  211246 out.go:374] Setting ErrFile to fd 2...
	I1018 18:24:11.168589  211246 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 18:24:11.169047  211246 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-2509/.minikube/bin
	I1018 18:24:11.169905  211246 out.go:368] Setting JSON to false
	I1018 18:24:11.171376  211246 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7601,"bootTime":1760804251,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1018 18:24:11.171505  211246 start.go:141] virtualization:  
	I1018 18:24:11.175567  211246 out.go:179] * [no-preload-729957] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 18:24:11.178810  211246 out.go:179]   - MINIKUBE_LOCATION=21409
	I1018 18:24:11.178962  211246 notify.go:220] Checking for updates...
	I1018 18:24:11.184816  211246 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 18:24:11.188052  211246 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-2509/kubeconfig
	I1018 18:24:11.191101  211246 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-2509/.minikube
	I1018 18:24:11.193984  211246 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 18:24:11.197079  211246 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 18:24:11.200515  211246 config.go:182] Loaded profile config "embed-certs-213943": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 18:24:11.200670  211246 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 18:24:11.229043  211246 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 18:24:11.229219  211246 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 18:24:11.296651  211246 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 18:24:11.280863285 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 18:24:11.296769  211246 docker.go:318] overlay module found
	I1018 18:24:11.300145  211246 out.go:179] * Using the docker driver based on user configuration
	I1018 18:24:11.303173  211246 start.go:305] selected driver: docker
	I1018 18:24:11.303208  211246 start.go:925] validating driver "docker" against <nil>
	I1018 18:24:11.303223  211246 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 18:24:11.303929  211246 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 18:24:11.358700  211246 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 18:24:11.349159169 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 18:24:11.358864  211246 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 18:24:11.359879  211246 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 18:24:11.362894  211246 out.go:179] * Using Docker driver with root privileges
	I1018 18:24:11.365765  211246 cni.go:84] Creating CNI manager for ""
	I1018 18:24:11.365840  211246 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 18:24:11.365852  211246 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 18:24:11.365942  211246 start.go:349] cluster config:
	{Name:no-preload-729957 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-729957 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID
:0 GPUs: AutoPauseInterval:1m0s}
	I1018 18:24:11.369098  211246 out.go:179] * Starting "no-preload-729957" primary control-plane node in "no-preload-729957" cluster
	I1018 18:24:11.371934  211246 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 18:24:11.374878  211246 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 18:24:11.377933  211246 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 18:24:11.378024  211246 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 18:24:11.378073  211246 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/no-preload-729957/config.json ...
	I1018 18:24:11.378103  211246 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/no-preload-729957/config.json: {Name:mk28e9d6cea09f76141683dde674f4cd54d76e44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:24:11.378340  211246 cache.go:107] acquiring lock: {Name:mkfe0c95c3696c6ee6d6bee7d1ad713b9bd021b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 18:24:11.378407  211246 cache.go:115] /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1018 18:24:11.378419  211246 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 88.715µs
	I1018 18:24:11.378432  211246 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1018 18:24:11.378445  211246 cache.go:107] acquiring lock: {Name:mkd26b3798aaf66fcad945e0c1a60f0824366e40 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 18:24:11.378517  211246 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1018 18:24:11.378856  211246 cache.go:107] acquiring lock: {Name:mkd3282648be7d83ac0e67296042440acb53052b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 18:24:11.378957  211246 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1018 18:24:11.379199  211246 cache.go:107] acquiring lock: {Name:mk6a37c53550d30a6c5a6027e63e35937896f954 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 18:24:11.379302  211246 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1018 18:24:11.379538  211246 cache.go:107] acquiring lock: {Name:mk2fda38822643b1c863eb02b4b58b1c8beea2d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 18:24:11.379648  211246 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1018 18:24:11.379896  211246 cache.go:107] acquiring lock: {Name:mk3a776414901f1896d41bf7105926b8db2f104a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 18:24:11.380019  211246 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1018 18:24:11.380272  211246 cache.go:107] acquiring lock: {Name:mka02bf3e7fa031efb5dd0162aedd881c5c29af2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 18:24:11.380413  211246 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1018 18:24:11.380670  211246 cache.go:107] acquiring lock: {Name:mke59697c6719748ff18c4e99b2595c9da08adaf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 18:24:11.380879  211246 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1018 18:24:11.383769  211246 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1018 18:24:11.384225  211246 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1018 18:24:11.384405  211246 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1018 18:24:11.384535  211246 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1018 18:24:11.384806  211246 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1018 18:24:11.385453  211246 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1018 18:24:11.385923  211246 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1018 18:24:11.404377  211246 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 18:24:11.404400  211246 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 18:24:11.404418  211246 cache.go:232] Successfully downloaded all kic artifacts
	I1018 18:24:11.404441  211246 start.go:360] acquireMachinesLock for no-preload-729957: {Name:mke750361707948cde27a747cd8852fabeab5692 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 18:24:11.404541  211246 start.go:364] duration metric: took 85.179µs to acquireMachinesLock for "no-preload-729957"
	I1018 18:24:11.404571  211246 start.go:93] Provisioning new machine with config: &{Name:no-preload-729957 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-729957 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 18:24:11.404657  211246 start.go:125] createHost starting for "" (driver="docker")
	W1018 18:24:10.477378  207600 pod_ready.go:104] pod "coredns-66bc5c9577-grf2z" is not "Ready", error: <nil>
	W1018 18:24:12.477864  207600 pod_ready.go:104] pod "coredns-66bc5c9577-grf2z" is not "Ready", error: <nil>
	I1018 18:24:11.408406  211246 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1018 18:24:11.408705  211246 start.go:159] libmachine.API.Create for "no-preload-729957" (driver="docker")
	I1018 18:24:11.408750  211246 client.go:168] LocalClient.Create starting
	I1018 18:24:11.408828  211246 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem
	I1018 18:24:11.408875  211246 main.go:141] libmachine: Decoding PEM data...
	I1018 18:24:11.408912  211246 main.go:141] libmachine: Parsing certificate...
	I1018 18:24:11.409042  211246 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem
	I1018 18:24:11.409076  211246 main.go:141] libmachine: Decoding PEM data...
	I1018 18:24:11.409097  211246 main.go:141] libmachine: Parsing certificate...
	I1018 18:24:11.409521  211246 cli_runner.go:164] Run: docker network inspect no-preload-729957 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 18:24:11.437735  211246 cli_runner.go:211] docker network inspect no-preload-729957 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 18:24:11.437811  211246 network_create.go:284] running [docker network inspect no-preload-729957] to gather additional debugging logs...
	I1018 18:24:11.437834  211246 cli_runner.go:164] Run: docker network inspect no-preload-729957
	W1018 18:24:11.454585  211246 cli_runner.go:211] docker network inspect no-preload-729957 returned with exit code 1
	I1018 18:24:11.454614  211246 network_create.go:287] error running [docker network inspect no-preload-729957]: docker network inspect no-preload-729957: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-729957 not found
	I1018 18:24:11.454629  211246 network_create.go:289] output of [docker network inspect no-preload-729957]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-729957 not found
	
	** /stderr **
	I1018 18:24:11.454726  211246 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 18:24:11.470946  211246 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-903568cdf824 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:7a:80:c0:8c:ed} reservation:<nil>}
	I1018 18:24:11.471278  211246 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-ee9fcaab9ca8 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a6:a7:65:1b:c0:41} reservation:<nil>}
	I1018 18:24:11.471601  211246 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-414fc11e154b IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:86:f0:a8:1a:86:00} reservation:<nil>}
	I1018 18:24:11.473509  211246 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001c64100}
	I1018 18:24:11.473539  211246 network_create.go:124] attempt to create docker network no-preload-729957 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1018 18:24:11.473595  211246 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-729957 no-preload-729957
	I1018 18:24:11.546112  211246 network_create.go:108] docker network no-preload-729957 192.168.76.0/24 created
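	[editor's note] The network-creation step above reduces to a single "docker network create" call with the free /24 that minikube picked after skipping the occupied 192.168.49/58/67 subnets. A minimal Go sketch of that call follows; the helper name and the os/exec approach are illustrative only (not minikube's cli_runner code), but the flag values are copied from the command logged above.

package main

import (
	"fmt"
	"os/exec"
)

// createNetwork is a hypothetical helper reproducing the `docker network create`
// invocation from the log; the flags match the logged command exactly.
func createNetwork(name, subnet, gateway string, mtu int) error {
	args := []string{
		"network", "create",
		"--driver=bridge",
		"--subnet=" + subnet,
		"--gateway=" + gateway,
		"-o", "--ip-masq",
		"-o", "--icc",
		"-o", fmt.Sprintf("com.docker.network.driver.mtu=%d", mtu),
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io=" + name,
		name,
	}
	if out, err := exec.Command("docker", args...).CombinedOutput(); err != nil {
		return fmt.Errorf("docker network create %s: %v: %s", name, err, out)
	}
	return nil
}

func main() {
	// Values taken from the log: subnet 192.168.76.0/24, gateway .1, MTU 1500.
	if err := createNetwork("no-preload-729957", "192.168.76.0/24", "192.168.76.1", 1500); err != nil {
		fmt.Println(err)
	}
}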
	I1018 18:24:11.546146  211246 kic.go:121] calculated static IP "192.168.76.2" for the "no-preload-729957" container
	I1018 18:24:11.546234  211246 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 18:24:11.567452  211246 cli_runner.go:164] Run: docker volume create no-preload-729957 --label name.minikube.sigs.k8s.io=no-preload-729957 --label created_by.minikube.sigs.k8s.io=true
	I1018 18:24:11.586702  211246 oci.go:103] Successfully created a docker volume no-preload-729957
	I1018 18:24:11.586798  211246 cli_runner.go:164] Run: docker run --rm --name no-preload-729957-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-729957 --entrypoint /usr/bin/test -v no-preload-729957:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 18:24:11.709734  211246 cache.go:162] opening:  /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1018 18:24:11.746271  211246 cache.go:162] opening:  /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1018 18:24:11.749272  211246 cache.go:162] opening:  /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1018 18:24:11.757327  211246 cache.go:162] opening:  /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1018 18:24:11.758631  211246 cache.go:162] opening:  /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1018 18:24:11.772040  211246 cache.go:162] opening:  /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1018 18:24:11.792103  211246 cache.go:162] opening:  /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1018 18:24:11.803795  211246 cache.go:157] /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1018 18:24:11.803825  211246 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 423.930056ms
	I1018 18:24:11.803838  211246 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1018 18:24:12.230277  211246 oci.go:107] Successfully prepared a docker volume no-preload-729957
	I1018 18:24:12.230322  211246 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	W1018 18:24:12.230452  211246 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1018 18:24:12.230567  211246 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 18:24:12.258173  211246 cache.go:157] /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1018 18:24:12.258199  211246 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 878.663764ms
	I1018 18:24:12.258213  211246 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1018 18:24:12.303433  211246 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-729957 --name no-preload-729957 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-729957 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-729957 --network no-preload-729957 --ip 192.168.76.2 --volume no-preload-729957:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 18:24:12.711996  211246 cache.go:157] /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1018 18:24:12.712030  211246 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 1.332834708s
	I1018 18:24:12.712043  211246 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1018 18:24:12.749221  211246 cache.go:157] /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1018 18:24:12.749290  211246 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 1.370436906s
	I1018 18:24:12.749318  211246 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1018 18:24:12.781266  211246 cli_runner.go:164] Run: docker container inspect no-preload-729957 --format={{.State.Running}}
	I1018 18:24:12.803112  211246 cache.go:157] /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1018 18:24:12.803187  211246 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 1.422519298s
	I1018 18:24:12.803214  211246 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1018 18:24:12.836730  211246 cli_runner.go:164] Run: docker container inspect no-preload-729957 --format={{.State.Status}}
	I1018 18:24:12.857442  211246 cache.go:157] /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1018 18:24:12.857515  211246 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 1.479068159s
	I1018 18:24:12.857544  211246 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1018 18:24:12.906521  211246 cli_runner.go:164] Run: docker exec no-preload-729957 stat /var/lib/dpkg/alternatives/iptables
	I1018 18:24:12.987387  211246 oci.go:144] the created container "no-preload-729957" has a running status.
	I1018 18:24:12.987457  211246 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/no-preload-729957/id_rsa...
	I1018 18:24:13.720510  211246 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21409-2509/.minikube/machines/no-preload-729957/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 18:24:13.744148  211246 cli_runner.go:164] Run: docker container inspect no-preload-729957 --format={{.State.Status}}
	I1018 18:24:13.780666  211246 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 18:24:13.780741  211246 kic_runner.go:114] Args: [docker exec --privileged no-preload-729957 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 18:24:13.850033  211246 cli_runner.go:164] Run: docker container inspect no-preload-729957 --format={{.State.Status}}
	I1018 18:24:13.877052  211246 machine.go:93] provisionDockerMachine start ...
	I1018 18:24:13.877147  211246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-729957
	I1018 18:24:13.917220  211246 main.go:141] libmachine: Using SSH client type: native
	I1018 18:24:13.917612  211246 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1018 18:24:13.917624  211246 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 18:24:14.101107  211246 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-729957
	
	I1018 18:24:14.101135  211246 ubuntu.go:182] provisioning hostname "no-preload-729957"
	I1018 18:24:14.101201  211246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-729957
	I1018 18:24:14.138725  211246 main.go:141] libmachine: Using SSH client type: native
	I1018 18:24:14.139031  211246 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1018 18:24:14.139048  211246 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-729957 && echo "no-preload-729957" | sudo tee /etc/hostname
	I1018 18:24:14.177408  211246 cache.go:157] /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1018 18:24:14.177434  211246 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 2.797165358s
	I1018 18:24:14.177446  211246 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1018 18:24:14.177484  211246 cache.go:87] Successfully saved all images to host disk.
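	[editor's note] The cache bookkeeping interleaved through the lines above follows one pattern per image: take a per-image lock, check whether the image tar already exists under .minikube/cache/images/<arch>/, and only pull and save it when it is missing. A rough Go sketch of that check, assuming the on-disk naming visible in the log (tag ':' rewritten to '_'); the function names are invented for illustration and are not minikube's cache.go API.

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
	"sync"
)

// cachePathFor maps an image ref such as "registry.k8s.io/pause:3.10.1" to the
// layout seen in the log, e.g. .minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1.
func cachePathFor(cacheRoot, arch, image string) string {
	return filepath.Join(cacheRoot, "images", arch, strings.ReplaceAll(image, ":", "_"))
}

// needsDownload reports whether the cached tar is missing; the mutex stands in
// for the named file locks (cache.go:107) the log acquires per image.
func needsDownload(mu *sync.Mutex, cacheRoot, arch, image string) bool {
	mu.Lock()
	defer mu.Unlock()
	_, err := os.Stat(cachePathFor(cacheRoot, arch, image))
	return err != nil
}

func main() {
	var mu sync.Mutex
	for _, img := range []string{
		"registry.k8s.io/pause:3.10.1",
		"registry.k8s.io/etcd:3.6.4-0",
		"gcr.io/k8s-minikube/storage-provisioner:v5",
	} {
		fmt.Printf("%s needs download: %v\n", img, needsDownload(&mu, ".minikube/cache", "arm64", img))
	}
}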
	I1018 18:24:14.325447  211246 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-729957
	
	I1018 18:24:14.325548  211246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-729957
	I1018 18:24:14.349347  211246 main.go:141] libmachine: Using SSH client type: native
	I1018 18:24:14.349654  211246 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1018 18:24:14.349683  211246 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-729957' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-729957/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-729957' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 18:24:14.505130  211246 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 18:24:14.505233  211246 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-2509/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-2509/.minikube}
	I1018 18:24:14.505275  211246 ubuntu.go:190] setting up certificates
	I1018 18:24:14.505289  211246 provision.go:84] configureAuth start
	I1018 18:24:14.505349  211246 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-729957
	I1018 18:24:14.522687  211246 provision.go:143] copyHostCerts
	I1018 18:24:14.522758  211246 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem, removing ...
	I1018 18:24:14.522768  211246 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem
	I1018 18:24:14.522841  211246 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem (1078 bytes)
	I1018 18:24:14.522934  211246 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem, removing ...
	I1018 18:24:14.522943  211246 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem
	I1018 18:24:14.522970  211246 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem (1123 bytes)
	I1018 18:24:14.523025  211246 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem, removing ...
	I1018 18:24:14.523033  211246 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem
	I1018 18:24:14.523057  211246 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem (1675 bytes)
	I1018 18:24:14.523107  211246 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem org=jenkins.no-preload-729957 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-729957]
	I1018 18:24:14.965279  211246 provision.go:177] copyRemoteCerts
	I1018 18:24:14.965369  211246 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 18:24:14.965437  211246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-729957
	I1018 18:24:14.988184  211246 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/no-preload-729957/id_rsa Username:docker}
	I1018 18:24:15.101158  211246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 18:24:15.120851  211246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1018 18:24:15.139092  211246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 18:24:15.157498  211246 provision.go:87] duration metric: took 652.185795ms to configureAuth
	I1018 18:24:15.157527  211246 ubuntu.go:206] setting minikube options for container-runtime
	I1018 18:24:15.157722  211246 config.go:182] Loaded profile config "no-preload-729957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 18:24:15.157830  211246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-729957
	I1018 18:24:15.175635  211246 main.go:141] libmachine: Using SSH client type: native
	I1018 18:24:15.175939  211246 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1018 18:24:15.175962  211246 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 18:24:15.534216  211246 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 18:24:15.534285  211246 machine.go:96] duration metric: took 1.657211758s to provisionDockerMachine
	I1018 18:24:15.534301  211246 client.go:171] duration metric: took 4.125543986s to LocalClient.Create
	I1018 18:24:15.534320  211246 start.go:167] duration metric: took 4.125619803s to libmachine.API.Create "no-preload-729957"
	I1018 18:24:15.534327  211246 start.go:293] postStartSetup for "no-preload-729957" (driver="docker")
	I1018 18:24:15.534338  211246 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 18:24:15.534413  211246 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 18:24:15.534460  211246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-729957
	I1018 18:24:15.559857  211246 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/no-preload-729957/id_rsa Username:docker}
	I1018 18:24:15.666256  211246 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 18:24:15.669830  211246 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 18:24:15.669865  211246 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 18:24:15.669877  211246 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/addons for local assets ...
	I1018 18:24:15.669940  211246 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/files for local assets ...
	I1018 18:24:15.670014  211246 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> 43202.pem in /etc/ssl/certs
	I1018 18:24:15.670114  211246 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 18:24:15.678138  211246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /etc/ssl/certs/43202.pem (1708 bytes)
	I1018 18:24:15.696512  211246 start.go:296] duration metric: took 162.170042ms for postStartSetup
	I1018 18:24:15.696892  211246 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-729957
	I1018 18:24:15.715340  211246 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/no-preload-729957/config.json ...
	I1018 18:24:15.715664  211246 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 18:24:15.715720  211246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-729957
	I1018 18:24:15.736145  211246 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/no-preload-729957/id_rsa Username:docker}
	I1018 18:24:15.838316  211246 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 18:24:15.843499  211246 start.go:128] duration metric: took 4.438826812s to createHost
	I1018 18:24:15.843524  211246 start.go:83] releasing machines lock for "no-preload-729957", held for 4.438969525s
	I1018 18:24:15.843593  211246 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-729957
	I1018 18:24:15.860278  211246 ssh_runner.go:195] Run: cat /version.json
	I1018 18:24:15.860336  211246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-729957
	I1018 18:24:15.860384  211246 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 18:24:15.860444  211246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-729957
	I1018 18:24:15.881444  211246 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/no-preload-729957/id_rsa Username:docker}
	I1018 18:24:15.889042  211246 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/no-preload-729957/id_rsa Username:docker}
	I1018 18:24:16.106267  211246 ssh_runner.go:195] Run: systemctl --version
	I1018 18:24:16.112790  211246 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 18:24:16.153317  211246 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 18:24:16.157761  211246 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 18:24:16.157837  211246 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 18:24:16.189272  211246 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1018 18:24:16.189294  211246 start.go:495] detecting cgroup driver to use...
	I1018 18:24:16.189356  211246 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 18:24:16.189430  211246 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 18:24:16.208009  211246 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 18:24:16.221047  211246 docker.go:218] disabling cri-docker service (if available) ...
	I1018 18:24:16.221116  211246 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 18:24:16.238482  211246 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 18:24:16.258556  211246 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 18:24:16.382340  211246 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 18:24:16.524303  211246 docker.go:234] disabling docker service ...
	I1018 18:24:16.524381  211246 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 18:24:16.550220  211246 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 18:24:16.564324  211246 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 18:24:16.679749  211246 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 18:24:16.805043  211246 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 18:24:16.817740  211246 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 18:24:16.834847  211246 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 18:24:16.834931  211246 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:24:16.844558  211246 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 18:24:16.844650  211246 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:24:16.854225  211246 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:24:16.863365  211246 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:24:16.871962  211246 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 18:24:16.880144  211246 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:24:16.889133  211246 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:24:16.903120  211246 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:24:16.912187  211246 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 18:24:16.920611  211246 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 18:24:16.928352  211246 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 18:24:17.045018  211246 ssh_runner.go:195] Run: sudo systemctl restart crio
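	[editor's note] The block above rewrites /etc/crio/crio.conf.d/02-crio.conf in place with sed and then restarts cri-o. The sketch below condenses the two central edits (pause image and cgroup manager) into a small Go program; the sed expressions are the ones the log shows, but running them locally instead of over ssh_runner is a simplification for illustration.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	// Same sed expressions as in the log: set the pause image and cgroup manager.
	edits := []string{
		`s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|`,
		`s|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|`,
	}
	for _, e := range edits {
		if out, err := exec.Command("sudo", "sed", "-i", e, conf).CombinedOutput(); err != nil {
			fmt.Printf("sed %q failed: %v: %s\n", e, err, out)
			return
		}
	}
	// As the log shows, cri-o must still be restarted (sudo systemctl restart crio)
	// for the edited configuration to take effect.
	fmt.Println("cri-o configuration updated")
}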
	I1018 18:24:17.185414  211246 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 18:24:17.185526  211246 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 18:24:17.189598  211246 start.go:563] Will wait 60s for crictl version
	I1018 18:24:17.189701  211246 ssh_runner.go:195] Run: which crictl
	I1018 18:24:17.193611  211246 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 18:24:17.221430  211246 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 18:24:17.221576  211246 ssh_runner.go:195] Run: crio --version
	I1018 18:24:17.254227  211246 ssh_runner.go:195] Run: crio --version
	I1018 18:24:17.308172  211246 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1018 18:24:14.977029  207600 pod_ready.go:104] pod "coredns-66bc5c9577-grf2z" is not "Ready", error: <nil>
	W1018 18:24:17.475961  207600 pod_ready.go:104] pod "coredns-66bc5c9577-grf2z" is not "Ready", error: <nil>
	W1018 18:24:19.480061  207600 pod_ready.go:104] pod "coredns-66bc5c9577-grf2z" is not "Ready", error: <nil>
	I1018 18:24:17.310905  211246 cli_runner.go:164] Run: docker network inspect no-preload-729957 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 18:24:17.326411  211246 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1018 18:24:17.330295  211246 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 18:24:17.339707  211246 kubeadm.go:883] updating cluster {Name:no-preload-729957 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-729957 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 18:24:17.339818  211246 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 18:24:17.339861  211246 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 18:24:17.365535  211246 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1018 18:24:17.365560  211246 cache_images.go:89] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1018 18:24:17.365605  211246 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 18:24:17.365824  211246 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1018 18:24:17.365932  211246 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1018 18:24:17.366021  211246 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1018 18:24:17.366124  211246 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1018 18:24:17.366219  211246 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1018 18:24:17.366314  211246 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1018 18:24:17.366441  211246 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1018 18:24:17.367466  211246 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1018 18:24:17.367707  211246 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1018 18:24:17.367862  211246 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1018 18:24:17.367993  211246 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1018 18:24:17.368155  211246 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1018 18:24:17.368302  211246 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1018 18:24:17.368449  211246 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 18:24:17.368768  211246 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1018 18:24:17.588075  211246 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.1
	I1018 18:24:17.588588  211246 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1018 18:24:17.604431  211246 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1018 18:24:17.604533  211246 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.1
	I1018 18:24:17.609488  211246 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	I1018 18:24:17.617110  211246 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.1
	I1018 18:24:17.617287  211246 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.1
	I1018 18:24:17.671559  211246 cache_images.go:117] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9" in container runtime
	I1018 18:24:17.671609  211246 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1018 18:24:17.671665  211246 ssh_runner.go:195] Run: which crictl
	I1018 18:24:17.671746  211246 cache_images.go:117] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1018 18:24:17.671765  211246 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1018 18:24:17.671806  211246 ssh_runner.go:195] Run: which crictl
	I1018 18:24:17.755580  211246 cache_images.go:117] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0" in container runtime
	I1018 18:24:17.755623  211246 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1018 18:24:17.755680  211246 ssh_runner.go:195] Run: which crictl
	I1018 18:24:17.755793  211246 cache_images.go:117] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e" in container runtime
	I1018 18:24:17.755923  211246 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1018 18:24:17.755955  211246 ssh_runner.go:195] Run: which crictl
	I1018 18:24:17.755870  211246 cache_images.go:117] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196" in container runtime
	I1018 18:24:17.756020  211246 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1018 18:24:17.756052  211246 ssh_runner.go:195] Run: which crictl
	I1018 18:24:17.755832  211246 cache_images.go:117] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc" in container runtime
	I1018 18:24:17.756094  211246 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1018 18:24:17.756130  211246 ssh_runner.go:195] Run: which crictl
	I1018 18:24:17.761752  211246 cache_images.go:117] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a" in container runtime
	I1018 18:24:17.761793  211246 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1018 18:24:17.761870  211246 ssh_runner.go:195] Run: which crictl
	I1018 18:24:17.761982  211246 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1018 18:24:17.762049  211246 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1018 18:24:17.768037  211246 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1018 18:24:17.768121  211246 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1018 18:24:17.768197  211246 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1018 18:24:17.768237  211246 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1018 18:24:17.868189  211246 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1018 18:24:17.868215  211246 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1018 18:24:17.868356  211246 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1018 18:24:17.868387  211246 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1018 18:24:17.871626  211246 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1018 18:24:17.871746  211246 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1018 18:24:17.871820  211246 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1018 18:24:17.963982  211246 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1018 18:24:17.964084  211246 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1018 18:24:17.964213  211246 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1018 18:24:17.998750  211246 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1018 18:24:17.998851  211246 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1018 18:24:17.998931  211246 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1018 18:24:17.998994  211246 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1018 18:24:18.066399  211246 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1018 18:24:18.066510  211246 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1018 18:24:18.066597  211246 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1018 18:24:18.066670  211246 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1018 18:24:18.066719  211246 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1018 18:24:18.126106  211246 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1018 18:24:18.126160  211246 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1018 18:24:18.126329  211246 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1018 18:24:18.126426  211246 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1018 18:24:18.127060  211246 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1018 18:24:18.127155  211246 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1018 18:24:18.138662  211246 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1018 18:24:18.138996  211246 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1018 18:24:18.138770  211246 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1018 18:24:18.139062  211246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (22790144 bytes)
	I1018 18:24:18.138805  211246 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1018 18:24:18.139135  211246 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1018 18:24:18.138826  211246 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1018 18:24:18.139164  211246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (15790592 bytes)
	I1018 18:24:18.138850  211246 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1018 18:24:18.139188  211246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1018 18:24:18.138867  211246 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1018 18:24:18.139212  211246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (20402176 bytes)
	I1018 18:24:18.138884  211246 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1018 18:24:18.139233  211246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (98216960 bytes)
	I1018 18:24:18.206044  211246 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1018 18:24:18.206089  211246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (20730880 bytes)
	I1018 18:24:18.206158  211246 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1018 18:24:18.206195  211246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (24581632 bytes)
	I1018 18:24:18.254107  211246 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1018 18:24:18.254235  211246 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1018 18:24:18.720444  211246 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1018 18:24:18.720530  211246 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1018 18:24:18.720609  211246 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	W1018 18:24:18.848540  211246 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1018 18:24:18.848843  211246 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 18:24:20.619745  211246 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.89907958s)
	I1018 18:24:20.619770  211246 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1018 18:24:20.619788  211246 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1018 18:24:20.619787  211246 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.770896306s)
	I1018 18:24:20.619824  211246 cache_images.go:117] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1018 18:24:20.619840  211246 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1018 18:24:20.619850  211246 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 18:24:20.619889  211246 ssh_runner.go:195] Run: which crictl
	I1018 18:24:20.976874  207600 pod_ready.go:94] pod "coredns-66bc5c9577-grf2z" is "Ready"
	I1018 18:24:20.976901  207600 pod_ready.go:86] duration metric: took 40.505774144s for pod "coredns-66bc5c9577-grf2z" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:24:20.987304  207600 pod_ready.go:83] waiting for pod "etcd-embed-certs-213943" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:24:20.993519  207600 pod_ready.go:94] pod "etcd-embed-certs-213943" is "Ready"
	I1018 18:24:20.993599  207600 pod_ready.go:86] duration metric: took 6.256516ms for pod "etcd-embed-certs-213943" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:24:20.995869  207600 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-213943" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:24:21.000461  207600 pod_ready.go:94] pod "kube-apiserver-embed-certs-213943" is "Ready"
	I1018 18:24:21.000484  207600 pod_ready.go:86] duration metric: took 4.593511ms for pod "kube-apiserver-embed-certs-213943" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:24:21.005649  207600 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-213943" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:24:21.175657  207600 pod_ready.go:94] pod "kube-controller-manager-embed-certs-213943" is "Ready"
	I1018 18:24:21.175686  207600 pod_ready.go:86] duration metric: took 169.959197ms for pod "kube-controller-manager-embed-certs-213943" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:24:21.376185  207600 pod_ready.go:83] waiting for pod "kube-proxy-gcf8n" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:24:21.775427  207600 pod_ready.go:94] pod "kube-proxy-gcf8n" is "Ready"
	I1018 18:24:21.775456  207600 pod_ready.go:86] duration metric: took 399.243127ms for pod "kube-proxy-gcf8n" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:24:21.975729  207600 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-213943" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:24:22.375901  207600 pod_ready.go:94] pod "kube-scheduler-embed-certs-213943" is "Ready"
	I1018 18:24:22.375933  207600 pod_ready.go:86] duration metric: took 400.174461ms for pod "kube-scheduler-embed-certs-213943" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:24:22.375957  207600 pod_ready.go:40] duration metric: took 41.946495197s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 18:24:22.447152  207600 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 18:24:22.451036  207600 out.go:179] * Done! kubectl is now configured to use "embed-certs-213943" cluster and "default" namespace by default
	I1018 18:24:22.312563  211246 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (1.69270153s)
	I1018 18:24:22.312593  211246 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1018 18:24:22.312611  211246 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1018 18:24:22.312666  211246 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1018 18:24:22.312734  211246 ssh_runner.go:235] Completed: which crictl: (1.692837671s)
	I1018 18:24:22.312763  211246 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 18:24:23.838821  211246 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.526036784s)
	I1018 18:24:23.838901  211246 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.526213656s)
	I1018 18:24:23.838913  211246 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 18:24:23.838920  211246 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1018 18:24:23.838939  211246 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1018 18:24:23.838977  211246 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1018 18:24:25.095042  211246 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.256044s)
	I1018 18:24:25.095070  211246 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1018 18:24:25.095089  211246 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1018 18:24:25.095138  211246 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1018 18:24:25.095206  211246 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.256281385s)
	I1018 18:24:25.095239  211246 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 18:24:26.460880  211246 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.365621596s)
	I1018 18:24:26.460924  211246 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1018 18:24:26.461012  211246 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.365853469s)
	I1018 18:24:26.461029  211246 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1018 18:24:26.461045  211246 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1018 18:24:26.461067  211246 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1018 18:24:26.461082  211246 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	I1018 18:24:26.466210  211246 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1018 18:24:26.466242  211246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1018 18:24:30.292538  211246 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (3.831434085s)
	I1018 18:24:30.292561  211246 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1018 18:24:30.292579  211246 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1018 18:24:30.292641  211246 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1018 18:24:30.901199  211246 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1018 18:24:30.901230  211246 cache_images.go:124] Successfully loaded all cached images
	I1018 18:24:30.901237  211246 cache_images.go:93] duration metric: took 13.535661895s to LoadCachedImages
	I1018 18:24:30.901248  211246 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1018 18:24:30.901419  211246 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-729957 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-729957 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 18:24:30.901542  211246 ssh_runner.go:195] Run: crio config
	I1018 18:24:30.974078  211246 cni.go:84] Creating CNI manager for ""
	I1018 18:24:30.974101  211246 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 18:24:30.974121  211246 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 18:24:30.974143  211246 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-729957 NodeName:no-preload-729957 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 18:24:30.974269  211246 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-729957"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 18:24:30.974347  211246 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 18:24:30.983149  211246 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1018 18:24:30.983253  211246 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1018 18:24:30.990902  211246 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
	I1018 18:24:30.990992  211246 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1018 18:24:30.991823  211246 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21409-2509/.minikube/cache/linux/arm64/v1.34.1/kubeadm
	I1018 18:24:30.992301  211246 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/21409-2509/.minikube/cache/linux/arm64/v1.34.1/kubelet
	I1018 18:24:30.995178  211246 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1018 18:24:30.995215  211246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/cache/linux/arm64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (58130616 bytes)
	I1018 18:24:31.923855  211246 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 18:24:31.953404  211246 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1018 18:24:31.957932  211246 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1018 18:24:31.958017  211246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/cache/linux/arm64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (56426788 bytes)
	I1018 18:24:32.008734  211246 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1018 18:24:32.033560  211246 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1018 18:24:32.033605  211246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/cache/linux/arm64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (71434424 bytes)
	I1018 18:24:32.651210  211246 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 18:24:32.661264  211246 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1018 18:24:32.678917  211246 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 18:24:32.696365  211246 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1018 18:24:32.714063  211246 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1018 18:24:32.718652  211246 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 18:24:32.730262  211246 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 18:24:32.863622  211246 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 18:24:32.885523  211246 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/no-preload-729957 for IP: 192.168.76.2
	I1018 18:24:32.885543  211246 certs.go:195] generating shared ca certs ...
	I1018 18:24:32.885560  211246 certs.go:227] acquiring lock for ca certs: {Name:mk544ed642fa2832e9f6dd22fa45f3270b7c1ad4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:24:32.885741  211246 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key
	I1018 18:24:32.885815  211246 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key
	I1018 18:24:32.885829  211246 certs.go:257] generating profile certs ...
	I1018 18:24:32.885901  211246 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/no-preload-729957/client.key
	I1018 18:24:32.885921  211246 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/no-preload-729957/client.crt with IP's: []
	I1018 18:24:33.405996  211246 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/no-preload-729957/client.crt ...
	I1018 18:24:33.406031  211246 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/no-preload-729957/client.crt: {Name:mkb88d1fc4eda926df0094c266b80eb07c0c6248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:24:33.406216  211246 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/no-preload-729957/client.key ...
	I1018 18:24:33.406231  211246 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/no-preload-729957/client.key: {Name:mk17de1715fbe442811f136f345e4d2d5d6152ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:24:33.411612  211246 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/no-preload-729957/apiserver.key.1af67460
	I1018 18:24:33.411644  211246 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/no-preload-729957/apiserver.crt.1af67460 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1018 18:24:34.012012  211246 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/no-preload-729957/apiserver.crt.1af67460 ...
	I1018 18:24:34.013017  211246 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/no-preload-729957/apiserver.crt.1af67460: {Name:mk01e269c988943fbd6908ef6682c4890911893c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:24:34.013280  211246 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/no-preload-729957/apiserver.key.1af67460 ...
	I1018 18:24:34.013295  211246 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/no-preload-729957/apiserver.key.1af67460: {Name:mk7125dd8d24a5b02578b34f7f552895728fedff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:24:34.013391  211246 certs.go:382] copying /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/no-preload-729957/apiserver.crt.1af67460 -> /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/no-preload-729957/apiserver.crt
	I1018 18:24:34.013480  211246 certs.go:386] copying /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/no-preload-729957/apiserver.key.1af67460 -> /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/no-preload-729957/apiserver.key
	I1018 18:24:34.013536  211246 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/no-preload-729957/proxy-client.key
	I1018 18:24:34.013550  211246 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/no-preload-729957/proxy-client.crt with IP's: []
	I1018 18:24:34.598586  211246 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/no-preload-729957/proxy-client.crt ...
	I1018 18:24:34.598705  211246 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/no-preload-729957/proxy-client.crt: {Name:mk701079d8162ef4118880b8525ea3a22971b851 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:24:34.602663  211246 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/no-preload-729957/proxy-client.key ...
	I1018 18:24:34.602683  211246 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/no-preload-729957/proxy-client.key: {Name:mkd0883998c50dca58e9d17878c2db1d77087a43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:24:34.602875  211246 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem (1338 bytes)
	W1018 18:24:34.602911  211246 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320_empty.pem, impossibly tiny 0 bytes
	I1018 18:24:34.602920  211246 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 18:24:34.602944  211246 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem (1078 bytes)
	I1018 18:24:34.602965  211246 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem (1123 bytes)
	I1018 18:24:34.602990  211246 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem (1675 bytes)
	I1018 18:24:34.603044  211246 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem (1708 bytes)
	I1018 18:24:34.603630  211246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 18:24:34.627876  211246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 18:24:34.655530  211246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 18:24:34.684716  211246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 18:24:34.709101  211246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/no-preload-729957/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1018 18:24:34.729135  211246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/no-preload-729957/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 18:24:34.748843  211246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/no-preload-729957/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 18:24:34.771623  211246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/no-preload-729957/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 18:24:34.791618  211246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 18:24:34.810983  211246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem --> /usr/share/ca-certificates/4320.pem (1338 bytes)
	I1018 18:24:34.842137  211246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /usr/share/ca-certificates/43202.pem (1708 bytes)
	I1018 18:24:34.861364  211246 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 18:24:34.875784  211246 ssh_runner.go:195] Run: openssl version
	I1018 18:24:34.884739  211246 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 18:24:34.894638  211246 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 18:24:34.903014  211246 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I1018 18:24:34.903082  211246 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 18:24:34.969989  211246 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 18:24:34.978550  211246 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4320.pem && ln -fs /usr/share/ca-certificates/4320.pem /etc/ssl/certs/4320.pem"
	I1018 18:24:34.986875  211246 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4320.pem
	I1018 18:24:34.991142  211246 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 17:19 /usr/share/ca-certificates/4320.pem
	I1018 18:24:34.991205  211246 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4320.pem
	I1018 18:24:35.033632  211246 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4320.pem /etc/ssl/certs/51391683.0"
	I1018 18:24:35.042547  211246 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43202.pem && ln -fs /usr/share/ca-certificates/43202.pem /etc/ssl/certs/43202.pem"
	I1018 18:24:35.051062  211246 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43202.pem
	I1018 18:24:35.055370  211246 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 17:19 /usr/share/ca-certificates/43202.pem
	I1018 18:24:35.055438  211246 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43202.pem
	I1018 18:24:35.098321  211246 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43202.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 18:24:35.107436  211246 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 18:24:35.111272  211246 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 18:24:35.111327  211246 kubeadm.go:400] StartCluster: {Name:no-preload-729957 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-729957 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 18:24:35.111399  211246 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 18:24:35.111478  211246 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 18:24:35.138568  211246 cri.go:89] found id: ""
	I1018 18:24:35.138664  211246 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 18:24:35.148036  211246 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 18:24:35.156345  211246 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 18:24:35.156422  211246 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 18:24:35.169226  211246 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 18:24:35.169249  211246 kubeadm.go:157] found existing configuration files:
	
	I1018 18:24:35.169322  211246 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 18:24:35.178752  211246 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 18:24:35.178831  211246 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 18:24:35.186961  211246 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 18:24:35.195614  211246 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 18:24:35.195692  211246 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 18:24:35.203346  211246 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 18:24:35.211938  211246 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 18:24:35.212028  211246 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 18:24:35.219789  211246 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 18:24:35.240853  211246 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 18:24:35.240972  211246 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1018 18:24:35.251364  211246 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 18:24:35.313066  211246 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 18:24:35.313439  211246 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 18:24:35.339221  211246 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 18:24:35.339303  211246 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1018 18:24:35.339347  211246 kubeadm.go:318] OS: Linux
	I1018 18:24:35.339419  211246 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 18:24:35.339488  211246 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1018 18:24:35.339559  211246 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 18:24:35.339624  211246 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 18:24:35.339689  211246 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 18:24:35.339783  211246 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 18:24:35.339848  211246 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 18:24:35.339918  211246 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 18:24:35.339981  211246 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1018 18:24:35.406398  211246 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 18:24:35.406538  211246 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 18:24:35.406652  211246 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 18:24:35.422809  211246 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 18:24:35.429788  211246 out.go:252]   - Generating certificates and keys ...
	I1018 18:24:35.429907  211246 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 18:24:35.429975  211246 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	
	
	==> CRI-O <==
	Oct 18 18:24:10 embed-certs-213943 crio[649]: time="2025-10-18T18:24:10.016688175Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=cc7c1f34-cd0c-45d1-b43b-3d1192e89c38 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 18:24:10 embed-certs-213943 crio[649]: time="2025-10-18T18:24:10.018165938Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=d89ba3d3-c0bd-45a4-a5eb-78dccdf037c5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 18:24:10 embed-certs-213943 crio[649]: time="2025-10-18T18:24:10.018481872Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 18:24:10 embed-certs-213943 crio[649]: time="2025-10-18T18:24:10.027923806Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 18:24:10 embed-certs-213943 crio[649]: time="2025-10-18T18:24:10.028101483Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/a20754eaa05d40c9cd62c858d6b1fb7a930d979ed875730d0583dcb90e0f24d0/merged/etc/passwd: no such file or directory"
	Oct 18 18:24:10 embed-certs-213943 crio[649]: time="2025-10-18T18:24:10.028125138Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/a20754eaa05d40c9cd62c858d6b1fb7a930d979ed875730d0583dcb90e0f24d0/merged/etc/group: no such file or directory"
	Oct 18 18:24:10 embed-certs-213943 crio[649]: time="2025-10-18T18:24:10.028371Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 18:24:10 embed-certs-213943 crio[649]: time="2025-10-18T18:24:10.060176413Z" level=info msg="Created container 6ea0fc669c5a5aed268bc4f1b1959ec658c78291c197f49575e209481a5d2d96: kube-system/storage-provisioner/storage-provisioner" id=d89ba3d3-c0bd-45a4-a5eb-78dccdf037c5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 18:24:10 embed-certs-213943 crio[649]: time="2025-10-18T18:24:10.061114026Z" level=info msg="Starting container: 6ea0fc669c5a5aed268bc4f1b1959ec658c78291c197f49575e209481a5d2d96" id=224c2175-184e-4a30-b8f5-7293cbf89c49 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 18:24:10 embed-certs-213943 crio[649]: time="2025-10-18T18:24:10.063519414Z" level=info msg="Started container" PID=1639 containerID=6ea0fc669c5a5aed268bc4f1b1959ec658c78291c197f49575e209481a5d2d96 description=kube-system/storage-provisioner/storage-provisioner id=224c2175-184e-4a30-b8f5-7293cbf89c49 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c72a61929cd732ab0dea5ab47285a4323ef4ca0517453a4abbf4abb6b9ee1ec4
	Oct 18 18:24:19 embed-certs-213943 crio[649]: time="2025-10-18T18:24:19.909983581Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 18:24:19 embed-certs-213943 crio[649]: time="2025-10-18T18:24:19.914454407Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 18:24:19 embed-certs-213943 crio[649]: time="2025-10-18T18:24:19.914617724Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 18:24:19 embed-certs-213943 crio[649]: time="2025-10-18T18:24:19.914707547Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 18:24:19 embed-certs-213943 crio[649]: time="2025-10-18T18:24:19.918618629Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 18:24:19 embed-certs-213943 crio[649]: time="2025-10-18T18:24:19.918766077Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 18:24:19 embed-certs-213943 crio[649]: time="2025-10-18T18:24:19.918842353Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 18:24:19 embed-certs-213943 crio[649]: time="2025-10-18T18:24:19.922135548Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 18:24:19 embed-certs-213943 crio[649]: time="2025-10-18T18:24:19.922269433Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 18:24:19 embed-certs-213943 crio[649]: time="2025-10-18T18:24:19.922337528Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 18:24:19 embed-certs-213943 crio[649]: time="2025-10-18T18:24:19.927718643Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 18:24:19 embed-certs-213943 crio[649]: time="2025-10-18T18:24:19.927896754Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 18:24:19 embed-certs-213943 crio[649]: time="2025-10-18T18:24:19.927974105Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 18:24:19 embed-certs-213943 crio[649]: time="2025-10-18T18:24:19.9310837Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 18:24:19 embed-certs-213943 crio[649]: time="2025-10-18T18:24:19.931225732Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	6ea0fc669c5a5       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           27 seconds ago       Running             storage-provisioner         2                   c72a61929cd73       storage-provisioner                          kube-system
	b1cbdd377acf4       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           30 seconds ago       Exited              dashboard-metrics-scraper   2                   ec7f93e23b19b       dashboard-metrics-scraper-6ffb444bf9-vn8f9   kubernetes-dashboard
	e9de5e570569b       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   45 seconds ago       Running             kubernetes-dashboard        0                   f7422b56b39a8       kubernetes-dashboard-855c9754f9-nmbd5        kubernetes-dashboard
	b3913cfee7fb2       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           58 seconds ago       Running             busybox                     1                   bedb791b15a59       busybox                                      default
	c7e1739778739       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           58 seconds ago       Exited              storage-provisioner         1                   c72a61929cd73       storage-provisioner                          kube-system
	630e3f457293e       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           58 seconds ago       Running             kindnet-cni                 1                   5db8954cad24f       kindnet-44fc8                                kube-system
	16aec3adc07ff       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           58 seconds ago       Running             coredns                     1                   6d0a0d5a3b94e       coredns-66bc5c9577-grf2z                     kube-system
	39d6593c6d8d5       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           58 seconds ago       Running             kube-proxy                  1                   5a7d1d72602dc       kube-proxy-gcf8n                             kube-system
	97b7723e6cc93       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   1537da0dc601d       kube-apiserver-embed-certs-213943            kube-system
	9ae5471fee776       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   c24c7d62ddf7d       kube-controller-manager-embed-certs-213943   kube-system
	579b2e90159d3       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   e0bf1ab11eafe       kube-scheduler-embed-certs-213943            kube-system
	320b2b6a0f723       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   6b6cedbf06b06       etcd-embed-certs-213943                      kube-system
	
	
	==> coredns [16aec3adc07fffcd5545d9bd12ca76fc45c9f92f49291dbfa7eb00de6d54c0ac] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38027 - 16333 "HINFO IN 6750387169992443311.6936307795065856942. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.056612977s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               embed-certs-213943
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-213943
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=embed-certs-213943
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T18_22_10_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 18:22:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-213943
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 18:24:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 18:24:08 +0000   Sat, 18 Oct 2025 18:22:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 18:24:08 +0000   Sat, 18 Oct 2025 18:22:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 18:24:08 +0000   Sat, 18 Oct 2025 18:22:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 18:24:08 +0000   Sat, 18 Oct 2025 18:22:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-213943
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                af083a40-edc0-4386-b2b1-7b1c8d51d4fc
	  Boot ID:                    8a1d6305-2994-4aa4-a4cc-6b62966b9918
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 coredns-66bc5c9577-grf2z                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m24s
	  kube-system                 etcd-embed-certs-213943                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m30s
	  kube-system                 kindnet-44fc8                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m24s
	  kube-system                 kube-apiserver-embed-certs-213943             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m29s
	  kube-system                 kube-controller-manager-embed-certs-213943    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m29s
	  kube-system                 kube-proxy-gcf8n                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 kube-scheduler-embed-certs-213943             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-vn8f9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-nmbd5         0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m22s                  kube-proxy       
	  Normal   Starting                 58s                    kube-proxy       
	  Normal   Starting                 2m37s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m37s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m36s (x8 over 2m36s)  kubelet          Node embed-certs-213943 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m36s (x8 over 2m36s)  kubelet          Node embed-certs-213943 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m36s (x8 over 2m36s)  kubelet          Node embed-certs-213943 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m29s                  kubelet          Node embed-certs-213943 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m29s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m29s                  kubelet          Node embed-certs-213943 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m29s                  kubelet          Node embed-certs-213943 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m29s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m25s                  node-controller  Node embed-certs-213943 event: Registered Node embed-certs-213943 in Controller
	  Normal   NodeReady                103s                   kubelet          Node embed-certs-213943 status is now: NodeReady
	  Normal   Starting                 67s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 67s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  67s (x8 over 67s)      kubelet          Node embed-certs-213943 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    67s (x8 over 67s)      kubelet          Node embed-certs-213943 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     67s (x8 over 67s)      kubelet          Node embed-certs-213943 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           56s                    node-controller  Node embed-certs-213943 event: Registered Node embed-certs-213943 in Controller
	
	
	==> dmesg <==
	[Oct18 18:04] overlayfs: idmapped layers are currently not supported
	[ +24.403909] overlayfs: idmapped layers are currently not supported
	[  +6.162774] overlayfs: idmapped layers are currently not supported
	[Oct18 18:05] overlayfs: idmapped layers are currently not supported
	[ +25.128760] overlayfs: idmapped layers are currently not supported
	[Oct18 18:06] overlayfs: idmapped layers are currently not supported
	[Oct18 18:07] overlayfs: idmapped layers are currently not supported
	[Oct18 18:08] overlayfs: idmapped layers are currently not supported
	[Oct18 18:09] overlayfs: idmapped layers are currently not supported
	[Oct18 18:11] overlayfs: idmapped layers are currently not supported
	[Oct18 18:13] overlayfs: idmapped layers are currently not supported
	[ +30.969240] overlayfs: idmapped layers are currently not supported
	[Oct18 18:15] overlayfs: idmapped layers are currently not supported
	[Oct18 18:16] overlayfs: idmapped layers are currently not supported
	[Oct18 18:17] overlayfs: idmapped layers are currently not supported
	[ +23.167826] overlayfs: idmapped layers are currently not supported
	[Oct18 18:18] overlayfs: idmapped layers are currently not supported
	[ +38.509809] overlayfs: idmapped layers are currently not supported
	[Oct18 18:19] overlayfs: idmapped layers are currently not supported
	[Oct18 18:21] overlayfs: idmapped layers are currently not supported
	[Oct18 18:22] overlayfs: idmapped layers are currently not supported
	[Oct18 18:23] overlayfs: idmapped layers are currently not supported
	[ +30.822562] overlayfs: idmapped layers are currently not supported
	[Oct18 18:24] bpfilter: read fail -512
	[ +10.607871] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [320b2b6a0f723790bef132bc7d46d0c55becfa751e8cd836c15cde5c23b0446d] <==
	{"level":"warn","ts":"2025-10-18T18:23:35.723610Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:35.754965Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:35.797634Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:35.816660Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:35.849241Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:35.879741Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:35.898191Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:35.927602Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:35.992485Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:36.038857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:36.081116Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:36.101663Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:36.140300Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:36.177799Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:36.213185Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:36.265157Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:36.321237Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:36.345809Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:36.386713Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:36.429901Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:36.542096Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:36.561374Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:36.636496Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:36.656243Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:36.736140Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47656","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 18:24:38 up  2:07,  0 user,  load average: 2.89, 2.97, 2.76
	Linux embed-certs-213943 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [630e3f457293e1639be23c9cecc27705318c350d2ca0ae9fa75f375bfdf573c8] <==
	I1018 18:23:39.615867       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 18:23:39.616265       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1018 18:23:39.616437       1 main.go:148] setting mtu 1500 for CNI 
	I1018 18:23:39.616482       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 18:23:39.616520       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T18:23:39Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 18:23:39.909975       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 18:23:39.910001       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 18:23:39.910010       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 18:23:39.910130       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1018 18:24:09.907427       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1018 18:24:09.908584       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1018 18:24:09.910930       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1018 18:24:09.911044       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1018 18:24:11.211145       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 18:24:11.211269       1 metrics.go:72] Registering metrics
	I1018 18:24:11.211527       1 controller.go:711] "Syncing nftables rules"
	I1018 18:24:19.909020       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 18:24:19.909070       1 main.go:301] handling current node
	I1018 18:24:29.906765       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 18:24:29.906809       1 main.go:301] handling current node
	
	
	==> kube-apiserver [97b7723e6cc93259a63a7dc305c6dd7a4974876e6dc283507e6d8ce5af737bcb] <==
	I1018 18:23:37.978770       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1018 18:23:37.987292       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1018 18:23:37.987658       1 cache.go:39] Caches are synced for autoregister controller
	I1018 18:23:37.993348       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1018 18:23:37.999445       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1018 18:23:37.999554       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1018 18:23:38.005359       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1018 18:23:38.016749       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1018 18:23:38.017454       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1018 18:23:38.018151       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 18:23:38.019344       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1018 18:23:38.019466       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1018 18:23:38.019622       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	E1018 18:23:38.049382       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1018 18:23:38.702921       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 18:23:38.841385       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 18:23:39.013187       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 18:23:39.375880       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 18:23:39.481270       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 18:23:39.525940       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 18:23:39.826147       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.167.49"}
	I1018 18:23:39.848196       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.206.42"}
	I1018 18:23:42.239878       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 18:23:42.539683       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 18:23:42.588843       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [9ae5471fee776db561d720631098bdc12432bd23b92d88eb2d07deb57fed51ac] <==
	I1018 18:23:42.161719       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 18:23:42.163443       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1018 18:23:42.174923       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 18:23:42.175018       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1018 18:23:42.175046       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1018 18:23:42.175459       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 18:23:42.175799       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1018 18:23:42.179495       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1018 18:23:42.182556       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1018 18:23:42.182669       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1018 18:23:42.182683       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1018 18:23:42.182702       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1018 18:23:42.182717       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 18:23:42.182728       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 18:23:42.187811       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1018 18:23:42.187956       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1018 18:23:42.188020       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 18:23:42.188070       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 18:23:42.191932       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 18:23:42.192052       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 18:23:42.192087       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 18:23:42.194481       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1018 18:23:42.194669       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 18:23:42.195258       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-213943"
	I1018 18:23:42.195377       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	
	
	==> kube-proxy [39d6593c6d8d54c71c1c11426effcafa05b750b8b4e8c8f61eccd2fde32ca8ec] <==
	I1018 18:23:39.660800       1 server_linux.go:53] "Using iptables proxy"
	I1018 18:23:39.840484       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 18:23:39.945048       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 18:23:39.946374       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1018 18:23:39.946533       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 18:23:39.986656       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 18:23:39.986728       1 server_linux.go:132] "Using iptables Proxier"
	I1018 18:23:39.991323       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 18:23:39.991733       1 server.go:527] "Version info" version="v1.34.1"
	I1018 18:23:39.991758       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 18:23:39.993535       1 config.go:200] "Starting service config controller"
	I1018 18:23:39.993562       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 18:23:39.993579       1 config.go:106] "Starting endpoint slice config controller"
	I1018 18:23:39.993583       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 18:23:39.993594       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 18:23:39.993598       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 18:23:40.020121       1 config.go:309] "Starting node config controller"
	I1018 18:23:40.020146       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 18:23:40.020154       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 18:23:40.093875       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 18:23:40.093934       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 18:23:40.094002       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [579b2e90159d3f472f72b4d74cead642311dbb50b6aa56372bed6e44fa5f0026] <==
	I1018 18:23:36.282302       1 serving.go:386] Generated self-signed cert in-memory
	I1018 18:23:39.155189       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 18:23:39.155223       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 18:23:39.200691       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 18:23:39.200790       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1018 18:23:39.200813       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1018 18:23:39.200855       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 18:23:39.226337       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 18:23:39.226377       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 18:23:39.226398       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 18:23:39.226407       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 18:23:39.311447       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1018 18:23:39.327267       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 18:23:39.328115       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 18:23:42 embed-certs-213943 kubelet[777]: I1018 18:23:42.958132     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/94a45217-a2e0-4738-a1ba-b67ebd545bcf-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-vn8f9\" (UID: \"94a45217-a2e0-4738-a1ba-b67ebd545bcf\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vn8f9"
	Oct 18 18:23:42 embed-certs-213943 kubelet[777]: I1018 18:23:42.958160     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rxdb\" (UniqueName: \"kubernetes.io/projected/94a45217-a2e0-4738-a1ba-b67ebd545bcf-kube-api-access-4rxdb\") pod \"dashboard-metrics-scraper-6ffb444bf9-vn8f9\" (UID: \"94a45217-a2e0-4738-a1ba-b67ebd545bcf\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vn8f9"
	Oct 18 18:23:42 embed-certs-213943 kubelet[777]: I1018 18:23:42.958187     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfqtf\" (UniqueName: \"kubernetes.io/projected/0c6652bc-b6db-4827-91e8-190090a50541-kube-api-access-xfqtf\") pod \"kubernetes-dashboard-855c9754f9-nmbd5\" (UID: \"0c6652bc-b6db-4827-91e8-190090a50541\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-nmbd5"
	Oct 18 18:23:43 embed-certs-213943 kubelet[777]: W1018 18:23:43.418689     777 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/f6d884df9095b5a97c2ba5df164207ee5c937524354408254d52ae7a929463c6/crio-f7422b56b39a8b8e2df04760b13976c8965e9db1c910e1dd260aa1eef5d4f402 WatchSource:0}: Error finding container f7422b56b39a8b8e2df04760b13976c8965e9db1c910e1dd260aa1eef5d4f402: Status 404 returned error can't find the container with id f7422b56b39a8b8e2df04760b13976c8965e9db1c910e1dd260aa1eef5d4f402
	Oct 18 18:23:47 embed-certs-213943 kubelet[777]: I1018 18:23:47.932476     777 scope.go:117] "RemoveContainer" containerID="68a722b442dce387f18e7e7cef708fd1ba3e349c7d4b12bd3d1eacb3ac296a37"
	Oct 18 18:23:48 embed-certs-213943 kubelet[777]: I1018 18:23:48.945543     777 scope.go:117] "RemoveContainer" containerID="fe68bf64cd0266dc4e20202680682a4d788bd9f315979e364f0db43903f7f49a"
	Oct 18 18:23:48 embed-certs-213943 kubelet[777]: E1018 18:23:48.945704     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vn8f9_kubernetes-dashboard(94a45217-a2e0-4738-a1ba-b67ebd545bcf)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vn8f9" podUID="94a45217-a2e0-4738-a1ba-b67ebd545bcf"
	Oct 18 18:23:48 embed-certs-213943 kubelet[777]: I1018 18:23:48.948663     777 scope.go:117] "RemoveContainer" containerID="68a722b442dce387f18e7e7cef708fd1ba3e349c7d4b12bd3d1eacb3ac296a37"
	Oct 18 18:23:49 embed-certs-213943 kubelet[777]: I1018 18:23:49.953318     777 scope.go:117] "RemoveContainer" containerID="fe68bf64cd0266dc4e20202680682a4d788bd9f315979e364f0db43903f7f49a"
	Oct 18 18:23:49 embed-certs-213943 kubelet[777]: E1018 18:23:49.953929     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vn8f9_kubernetes-dashboard(94a45217-a2e0-4738-a1ba-b67ebd545bcf)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vn8f9" podUID="94a45217-a2e0-4738-a1ba-b67ebd545bcf"
	Oct 18 18:23:53 embed-certs-213943 kubelet[777]: I1018 18:23:53.075119     777 scope.go:117] "RemoveContainer" containerID="fe68bf64cd0266dc4e20202680682a4d788bd9f315979e364f0db43903f7f49a"
	Oct 18 18:23:53 embed-certs-213943 kubelet[777]: E1018 18:23:53.075336     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vn8f9_kubernetes-dashboard(94a45217-a2e0-4738-a1ba-b67ebd545bcf)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vn8f9" podUID="94a45217-a2e0-4738-a1ba-b67ebd545bcf"
	Oct 18 18:24:07 embed-certs-213943 kubelet[777]: I1018 18:24:07.819003     777 scope.go:117] "RemoveContainer" containerID="fe68bf64cd0266dc4e20202680682a4d788bd9f315979e364f0db43903f7f49a"
	Oct 18 18:24:08 embed-certs-213943 kubelet[777]: I1018 18:24:08.001695     777 scope.go:117] "RemoveContainer" containerID="fe68bf64cd0266dc4e20202680682a4d788bd9f315979e364f0db43903f7f49a"
	Oct 18 18:24:08 embed-certs-213943 kubelet[777]: I1018 18:24:08.002154     777 scope.go:117] "RemoveContainer" containerID="b1cbdd377acf4ce0ba012efbe8a92d490f1cf26de33b65c0792311ca69b2f97d"
	Oct 18 18:24:08 embed-certs-213943 kubelet[777]: E1018 18:24:08.002865     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vn8f9_kubernetes-dashboard(94a45217-a2e0-4738-a1ba-b67ebd545bcf)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vn8f9" podUID="94a45217-a2e0-4738-a1ba-b67ebd545bcf"
	Oct 18 18:24:08 embed-certs-213943 kubelet[777]: I1018 18:24:08.056202     777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-nmbd5" podStartSLOduration=16.643287148 podStartE2EDuration="26.054462058s" podCreationTimestamp="2025-10-18 18:23:42 +0000 UTC" firstStartedPulling="2025-10-18 18:23:43.424571663 +0000 UTC m=+11.814421641" lastFinishedPulling="2025-10-18 18:23:52.835746573 +0000 UTC m=+21.225596551" observedRunningTime="2025-10-18 18:23:52.980625077 +0000 UTC m=+21.370475063" watchObservedRunningTime="2025-10-18 18:24:08.054462058 +0000 UTC m=+36.444312036"
	Oct 18 18:24:10 embed-certs-213943 kubelet[777]: I1018 18:24:10.014024     777 scope.go:117] "RemoveContainer" containerID="c7e17397787390cfe2e365edc60882b35fef038d500e72ed7964bce1242d4793"
	Oct 18 18:24:13 embed-certs-213943 kubelet[777]: I1018 18:24:13.075523     777 scope.go:117] "RemoveContainer" containerID="b1cbdd377acf4ce0ba012efbe8a92d490f1cf26de33b65c0792311ca69b2f97d"
	Oct 18 18:24:13 embed-certs-213943 kubelet[777]: E1018 18:24:13.076363     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vn8f9_kubernetes-dashboard(94a45217-a2e0-4738-a1ba-b67ebd545bcf)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vn8f9" podUID="94a45217-a2e0-4738-a1ba-b67ebd545bcf"
	Oct 18 18:24:24 embed-certs-213943 kubelet[777]: I1018 18:24:24.818913     777 scope.go:117] "RemoveContainer" containerID="b1cbdd377acf4ce0ba012efbe8a92d490f1cf26de33b65c0792311ca69b2f97d"
	Oct 18 18:24:24 embed-certs-213943 kubelet[777]: E1018 18:24:24.819090     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vn8f9_kubernetes-dashboard(94a45217-a2e0-4738-a1ba-b67ebd545bcf)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vn8f9" podUID="94a45217-a2e0-4738-a1ba-b67ebd545bcf"
	Oct 18 18:24:35 embed-certs-213943 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 18:24:35 embed-certs-213943 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 18:24:35 embed-certs-213943 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [e9de5e570569bf04ee4708a292c5a4963413811ea3989c2d9d52ea34af3ed27e] <==
	2025/10/18 18:23:52 Using namespace: kubernetes-dashboard
	2025/10/18 18:23:52 Using in-cluster config to connect to apiserver
	2025/10/18 18:23:52 Using secret token for csrf signing
	2025/10/18 18:23:52 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/18 18:23:52 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/18 18:23:52 Successful initial request to the apiserver, version: v1.34.1
	2025/10/18 18:23:52 Generating JWE encryption key
	2025/10/18 18:23:53 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/18 18:23:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/18 18:23:53 Initializing JWE encryption key from synchronized object
	2025/10/18 18:23:53 Creating in-cluster Sidecar client
	2025/10/18 18:23:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 18:23:53 Serving insecurely on HTTP port: 9090
	2025/10/18 18:24:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 18:23:52 Starting overwatch
	
	
	==> storage-provisioner [6ea0fc669c5a5aed268bc4f1b1959ec658c78291c197f49575e209481a5d2d96] <==
	I1018 18:24:10.094501       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 18:24:10.094576       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1018 18:24:10.099471       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:24:13.554976       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:24:17.815138       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:24:21.414513       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:24:24.468589       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:24:27.490811       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:24:27.499583       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 18:24:27.499837       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 18:24:27.500044       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-213943_b322ac88-cd24-422e-8b88-68dd31ec1db6!
	I1018 18:24:27.504928       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"df0c2d90-b1dc-4b33-97ec-b51fa8382283", APIVersion:"v1", ResourceVersion:"685", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-213943_b322ac88-cd24-422e-8b88-68dd31ec1db6 became leader
	W1018 18:24:27.505186       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:24:27.515086       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 18:24:27.606163       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-213943_b322ac88-cd24-422e-8b88-68dd31ec1db6!
	W1018 18:24:29.517832       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:24:29.534669       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:24:31.541539       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:24:31.554621       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:24:33.557471       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:24:33.568620       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:24:35.572843       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:24:35.582516       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:24:37.587546       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:24:37.601906       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [c7e17397787390cfe2e365edc60882b35fef038d500e72ed7964bce1242d4793] <==
	I1018 18:23:39.760320       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1018 18:24:09.762444       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-213943 -n embed-certs-213943
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-213943 -n embed-certs-213943: exit status 2 (581.958584ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-213943 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-213943
helpers_test.go:243: (dbg) docker inspect embed-certs-213943:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f6d884df9095b5a97c2ba5df164207ee5c937524354408254d52ae7a929463c6",
	        "Created": "2025-10-18T18:21:41.10994787Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 207729,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T18:23:24.901773449Z",
	            "FinishedAt": "2025-10-18T18:23:23.346386885Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/f6d884df9095b5a97c2ba5df164207ee5c937524354408254d52ae7a929463c6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f6d884df9095b5a97c2ba5df164207ee5c937524354408254d52ae7a929463c6/hostname",
	        "HostsPath": "/var/lib/docker/containers/f6d884df9095b5a97c2ba5df164207ee5c937524354408254d52ae7a929463c6/hosts",
	        "LogPath": "/var/lib/docker/containers/f6d884df9095b5a97c2ba5df164207ee5c937524354408254d52ae7a929463c6/f6d884df9095b5a97c2ba5df164207ee5c937524354408254d52ae7a929463c6-json.log",
	        "Name": "/embed-certs-213943",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-213943:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-213943",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f6d884df9095b5a97c2ba5df164207ee5c937524354408254d52ae7a929463c6",
	                "LowerDir": "/var/lib/docker/overlay2/5ae3bc0eef02b15432a8f6a5068c9db91f9b4ede8c0e696a3d1cf388220bd2a0-init/diff:/var/lib/docker/overlay2/584ab177b02ad2db5330471b7171ad39934c457d8615b9ee4939a04b59f78474/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5ae3bc0eef02b15432a8f6a5068c9db91f9b4ede8c0e696a3d1cf388220bd2a0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5ae3bc0eef02b15432a8f6a5068c9db91f9b4ede8c0e696a3d1cf388220bd2a0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5ae3bc0eef02b15432a8f6a5068c9db91f9b4ede8c0e696a3d1cf388220bd2a0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-213943",
	                "Source": "/var/lib/docker/volumes/embed-certs-213943/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-213943",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-213943",
	                "name.minikube.sigs.k8s.io": "embed-certs-213943",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "292c25d22c4d9e11f46faa2ed367e503eb41b676995716fa90e11979d4b0c620",
	            "SandboxKey": "/var/run/docker/netns/292c25d22c4d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33068"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-213943": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "32:2b:3c:2a:0b:95",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "efe92dc8c8166df0c3008dadfb93e08ef35b4f9b392d6a8aee91eaee89568b86",
	                    "EndpointID": "8afb40a859bca5f3ddae67dcdb5e5c6065e66e48ead1cf82cb0cab54eeff0b2a",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-213943",
	                        "f6d884df9095"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-213943 -n embed-certs-213943
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-213943 -n embed-certs-213943: exit status 2 (464.765776ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-213943 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-213943 logs -n 25: (1.556107726s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p old-k8s-version-918475 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-918475       │ jenkins │ v1.37.0 │ 18 Oct 25 18:19 UTC │ 18 Oct 25 18:20 UTC │
	│ image   │ old-k8s-version-918475 image list --format=json                                                                                                                                                                                               │ old-k8s-version-918475       │ jenkins │ v1.37.0 │ 18 Oct 25 18:20 UTC │ 18 Oct 25 18:20 UTC │
	│ pause   │ -p old-k8s-version-918475 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-918475       │ jenkins │ v1.37.0 │ 18 Oct 25 18:20 UTC │                     │
	│ delete  │ -p old-k8s-version-918475                                                                                                                                                                                                                     │ old-k8s-version-918475       │ jenkins │ v1.37.0 │ 18 Oct 25 18:20 UTC │ 18 Oct 25 18:21 UTC │
	│ start   │ -p cert-expiration-463770 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-463770       │ jenkins │ v1.37.0 │ 18 Oct 25 18:21 UTC │ 18 Oct 25 18:21 UTC │
	│ delete  │ -p old-k8s-version-918475                                                                                                                                                                                                                     │ old-k8s-version-918475       │ jenkins │ v1.37.0 │ 18 Oct 25 18:21 UTC │ 18 Oct 25 18:21 UTC │
	│ start   │ -p default-k8s-diff-port-192562 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-192562 │ jenkins │ v1.37.0 │ 18 Oct 25 18:21 UTC │ 18 Oct 25 18:22 UTC │
	│ delete  │ -p cert-expiration-463770                                                                                                                                                                                                                     │ cert-expiration-463770       │ jenkins │ v1.37.0 │ 18 Oct 25 18:21 UTC │ 18 Oct 25 18:21 UTC │
	│ start   │ -p embed-certs-213943 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-213943           │ jenkins │ v1.37.0 │ 18 Oct 25 18:21 UTC │ 18 Oct 25 18:22 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-192562 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-192562 │ jenkins │ v1.37.0 │ 18 Oct 25 18:22 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-192562 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-192562 │ jenkins │ v1.37.0 │ 18 Oct 25 18:22 UTC │ 18 Oct 25 18:22 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-192562 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-192562 │ jenkins │ v1.37.0 │ 18 Oct 25 18:22 UTC │ 18 Oct 25 18:22 UTC │
	│ start   │ -p default-k8s-diff-port-192562 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-192562 │ jenkins │ v1.37.0 │ 18 Oct 25 18:22 UTC │ 18 Oct 25 18:23 UTC │
	│ addons  │ enable metrics-server -p embed-certs-213943 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-213943           │ jenkins │ v1.37.0 │ 18 Oct 25 18:23 UTC │                     │
	│ stop    │ -p embed-certs-213943 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-213943           │ jenkins │ v1.37.0 │ 18 Oct 25 18:23 UTC │ 18 Oct 25 18:23 UTC │
	│ addons  │ enable dashboard -p embed-certs-213943 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-213943           │ jenkins │ v1.37.0 │ 18 Oct 25 18:23 UTC │ 18 Oct 25 18:23 UTC │
	│ start   │ -p embed-certs-213943 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-213943           │ jenkins │ v1.37.0 │ 18 Oct 25 18:23 UTC │ 18 Oct 25 18:24 UTC │
	│ image   │ default-k8s-diff-port-192562 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-192562 │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │ 18 Oct 25 18:24 UTC │
	│ pause   │ -p default-k8s-diff-port-192562 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-192562 │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-192562                                                                                                                                                                                                               │ default-k8s-diff-port-192562 │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │ 18 Oct 25 18:24 UTC │
	│ delete  │ -p default-k8s-diff-port-192562                                                                                                                                                                                                               │ default-k8s-diff-port-192562 │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │ 18 Oct 25 18:24 UTC │
	│ delete  │ -p disable-driver-mounts-747178                                                                                                                                                                                                               │ disable-driver-mounts-747178 │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │ 18 Oct 25 18:24 UTC │
	│ start   │ -p no-preload-729957 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-729957            │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │                     │
	│ image   │ embed-certs-213943 image list --format=json                                                                                                                                                                                                   │ embed-certs-213943           │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │ 18 Oct 25 18:24 UTC │
	│ pause   │ -p embed-certs-213943 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-213943           │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 18:24:11
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 18:24:11.168077  211246 out.go:360] Setting OutFile to fd 1 ...
	I1018 18:24:11.168215  211246 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 18:24:11.168229  211246 out.go:374] Setting ErrFile to fd 2...
	I1018 18:24:11.168589  211246 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 18:24:11.169047  211246 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-2509/.minikube/bin
	I1018 18:24:11.169905  211246 out.go:368] Setting JSON to false
	I1018 18:24:11.171376  211246 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7601,"bootTime":1760804251,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1018 18:24:11.171505  211246 start.go:141] virtualization:  
	I1018 18:24:11.175567  211246 out.go:179] * [no-preload-729957] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 18:24:11.178810  211246 out.go:179]   - MINIKUBE_LOCATION=21409
	I1018 18:24:11.178962  211246 notify.go:220] Checking for updates...
	I1018 18:24:11.184816  211246 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 18:24:11.188052  211246 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-2509/kubeconfig
	I1018 18:24:11.191101  211246 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-2509/.minikube
	I1018 18:24:11.193984  211246 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 18:24:11.197079  211246 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 18:24:11.200515  211246 config.go:182] Loaded profile config "embed-certs-213943": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 18:24:11.200670  211246 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 18:24:11.229043  211246 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 18:24:11.229219  211246 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 18:24:11.296651  211246 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 18:24:11.280863285 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 18:24:11.296769  211246 docker.go:318] overlay module found
	I1018 18:24:11.300145  211246 out.go:179] * Using the docker driver based on user configuration
	I1018 18:24:11.303173  211246 start.go:305] selected driver: docker
	I1018 18:24:11.303208  211246 start.go:925] validating driver "docker" against <nil>
	I1018 18:24:11.303223  211246 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 18:24:11.303929  211246 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 18:24:11.358700  211246 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 18:24:11.349159169 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 18:24:11.358864  211246 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 18:24:11.359879  211246 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 18:24:11.362894  211246 out.go:179] * Using Docker driver with root privileges
	I1018 18:24:11.365765  211246 cni.go:84] Creating CNI manager for ""
	I1018 18:24:11.365840  211246 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 18:24:11.365852  211246 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 18:24:11.365942  211246 start.go:349] cluster config:
	{Name:no-preload-729957 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-729957 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID
:0 GPUs: AutoPauseInterval:1m0s}
	I1018 18:24:11.369098  211246 out.go:179] * Starting "no-preload-729957" primary control-plane node in "no-preload-729957" cluster
	I1018 18:24:11.371934  211246 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 18:24:11.374878  211246 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 18:24:11.377933  211246 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 18:24:11.378024  211246 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 18:24:11.378073  211246 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/no-preload-729957/config.json ...
	I1018 18:24:11.378103  211246 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/no-preload-729957/config.json: {Name:mk28e9d6cea09f76141683dde674f4cd54d76e44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:24:11.378340  211246 cache.go:107] acquiring lock: {Name:mkfe0c95c3696c6ee6d6bee7d1ad713b9bd021b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 18:24:11.378407  211246 cache.go:115] /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1018 18:24:11.378419  211246 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 88.715µs
	I1018 18:24:11.378432  211246 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1018 18:24:11.378445  211246 cache.go:107] acquiring lock: {Name:mkd26b3798aaf66fcad945e0c1a60f0824366e40 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 18:24:11.378517  211246 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1018 18:24:11.378856  211246 cache.go:107] acquiring lock: {Name:mkd3282648be7d83ac0e67296042440acb53052b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 18:24:11.378957  211246 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1018 18:24:11.379199  211246 cache.go:107] acquiring lock: {Name:mk6a37c53550d30a6c5a6027e63e35937896f954 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 18:24:11.379302  211246 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1018 18:24:11.379538  211246 cache.go:107] acquiring lock: {Name:mk2fda38822643b1c863eb02b4b58b1c8beea2d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 18:24:11.379648  211246 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1018 18:24:11.379896  211246 cache.go:107] acquiring lock: {Name:mk3a776414901f1896d41bf7105926b8db2f104a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 18:24:11.380019  211246 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1018 18:24:11.380272  211246 cache.go:107] acquiring lock: {Name:mka02bf3e7fa031efb5dd0162aedd881c5c29af2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 18:24:11.380413  211246 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1018 18:24:11.380670  211246 cache.go:107] acquiring lock: {Name:mke59697c6719748ff18c4e99b2595c9da08adaf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 18:24:11.380879  211246 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1018 18:24:11.383769  211246 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1018 18:24:11.384225  211246 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1018 18:24:11.384405  211246 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1018 18:24:11.384535  211246 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1018 18:24:11.384806  211246 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1018 18:24:11.385453  211246 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1018 18:24:11.385923  211246 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1018 18:24:11.404377  211246 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 18:24:11.404400  211246 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 18:24:11.404418  211246 cache.go:232] Successfully downloaded all kic artifacts
	I1018 18:24:11.404441  211246 start.go:360] acquireMachinesLock for no-preload-729957: {Name:mke750361707948cde27a747cd8852fabeab5692 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 18:24:11.404541  211246 start.go:364] duration metric: took 85.179µs to acquireMachinesLock for "no-preload-729957"
	I1018 18:24:11.404571  211246 start.go:93] Provisioning new machine with config: &{Name:no-preload-729957 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-729957 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwa
rePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 18:24:11.404657  211246 start.go:125] createHost starting for "" (driver="docker")
	W1018 18:24:10.477378  207600 pod_ready.go:104] pod "coredns-66bc5c9577-grf2z" is not "Ready", error: <nil>
	W1018 18:24:12.477864  207600 pod_ready.go:104] pod "coredns-66bc5c9577-grf2z" is not "Ready", error: <nil>
	I1018 18:24:11.408406  211246 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1018 18:24:11.408705  211246 start.go:159] libmachine.API.Create for "no-preload-729957" (driver="docker")
	I1018 18:24:11.408750  211246 client.go:168] LocalClient.Create starting
	I1018 18:24:11.408828  211246 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem
	I1018 18:24:11.408875  211246 main.go:141] libmachine: Decoding PEM data...
	I1018 18:24:11.408912  211246 main.go:141] libmachine: Parsing certificate...
	I1018 18:24:11.409042  211246 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem
	I1018 18:24:11.409076  211246 main.go:141] libmachine: Decoding PEM data...
	I1018 18:24:11.409097  211246 main.go:141] libmachine: Parsing certificate...
	I1018 18:24:11.409521  211246 cli_runner.go:164] Run: docker network inspect no-preload-729957 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 18:24:11.437735  211246 cli_runner.go:211] docker network inspect no-preload-729957 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 18:24:11.437811  211246 network_create.go:284] running [docker network inspect no-preload-729957] to gather additional debugging logs...
	I1018 18:24:11.437834  211246 cli_runner.go:164] Run: docker network inspect no-preload-729957
	W1018 18:24:11.454585  211246 cli_runner.go:211] docker network inspect no-preload-729957 returned with exit code 1
	I1018 18:24:11.454614  211246 network_create.go:287] error running [docker network inspect no-preload-729957]: docker network inspect no-preload-729957: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-729957 not found
	I1018 18:24:11.454629  211246 network_create.go:289] output of [docker network inspect no-preload-729957]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-729957 not found
	
	** /stderr **
	I1018 18:24:11.454726  211246 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 18:24:11.470946  211246 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-903568cdf824 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:7a:80:c0:8c:ed} reservation:<nil>}
	I1018 18:24:11.471278  211246 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-ee9fcaab9ca8 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a6:a7:65:1b:c0:41} reservation:<nil>}
	I1018 18:24:11.471601  211246 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-414fc11e154b IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:86:f0:a8:1a:86:00} reservation:<nil>}
	I1018 18:24:11.473509  211246 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001c64100}
	I1018 18:24:11.473539  211246 network_create.go:124] attempt to create docker network no-preload-729957 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1018 18:24:11.473595  211246 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-729957 no-preload-729957
	I1018 18:24:11.546112  211246 network_create.go:108] docker network no-preload-729957 192.168.76.0/24 created
	I1018 18:24:11.546146  211246 kic.go:121] calculated static IP "192.168.76.2" for the "no-preload-729957" container
	I1018 18:24:11.546234  211246 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 18:24:11.567452  211246 cli_runner.go:164] Run: docker volume create no-preload-729957 --label name.minikube.sigs.k8s.io=no-preload-729957 --label created_by.minikube.sigs.k8s.io=true
	I1018 18:24:11.586702  211246 oci.go:103] Successfully created a docker volume no-preload-729957
	I1018 18:24:11.586798  211246 cli_runner.go:164] Run: docker run --rm --name no-preload-729957-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-729957 --entrypoint /usr/bin/test -v no-preload-729957:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 18:24:11.709734  211246 cache.go:162] opening:  /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1018 18:24:11.746271  211246 cache.go:162] opening:  /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1018 18:24:11.749272  211246 cache.go:162] opening:  /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1018 18:24:11.757327  211246 cache.go:162] opening:  /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1018 18:24:11.758631  211246 cache.go:162] opening:  /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1018 18:24:11.772040  211246 cache.go:162] opening:  /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1018 18:24:11.792103  211246 cache.go:162] opening:  /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1018 18:24:11.803795  211246 cache.go:157] /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1018 18:24:11.803825  211246 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 423.930056ms
	I1018 18:24:11.803838  211246 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1018 18:24:12.230277  211246 oci.go:107] Successfully prepared a docker volume no-preload-729957
	I1018 18:24:12.230322  211246 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	W1018 18:24:12.230452  211246 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1018 18:24:12.230567  211246 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 18:24:12.258173  211246 cache.go:157] /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1018 18:24:12.258199  211246 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 878.663764ms
	I1018 18:24:12.258213  211246 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1018 18:24:12.303433  211246 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-729957 --name no-preload-729957 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-729957 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-729957 --network no-preload-729957 --ip 192.168.76.2 --volume no-preload-729957:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 18:24:12.711996  211246 cache.go:157] /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1018 18:24:12.712030  211246 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 1.332834708s
	I1018 18:24:12.712043  211246 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1018 18:24:12.749221  211246 cache.go:157] /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1018 18:24:12.749290  211246 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 1.370436906s
	I1018 18:24:12.749318  211246 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1018 18:24:12.781266  211246 cli_runner.go:164] Run: docker container inspect no-preload-729957 --format={{.State.Running}}
	I1018 18:24:12.803112  211246 cache.go:157] /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1018 18:24:12.803187  211246 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 1.422519298s
	I1018 18:24:12.803214  211246 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1018 18:24:12.836730  211246 cli_runner.go:164] Run: docker container inspect no-preload-729957 --format={{.State.Status}}
	I1018 18:24:12.857442  211246 cache.go:157] /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1018 18:24:12.857515  211246 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 1.479068159s
	I1018 18:24:12.857544  211246 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1018 18:24:12.906521  211246 cli_runner.go:164] Run: docker exec no-preload-729957 stat /var/lib/dpkg/alternatives/iptables
	I1018 18:24:12.987387  211246 oci.go:144] the created container "no-preload-729957" has a running status.
	I1018 18:24:12.987457  211246 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/no-preload-729957/id_rsa...
	I1018 18:24:13.720510  211246 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21409-2509/.minikube/machines/no-preload-729957/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 18:24:13.744148  211246 cli_runner.go:164] Run: docker container inspect no-preload-729957 --format={{.State.Status}}
	I1018 18:24:13.780666  211246 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 18:24:13.780741  211246 kic_runner.go:114] Args: [docker exec --privileged no-preload-729957 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 18:24:13.850033  211246 cli_runner.go:164] Run: docker container inspect no-preload-729957 --format={{.State.Status}}
	I1018 18:24:13.877052  211246 machine.go:93] provisionDockerMachine start ...
	I1018 18:24:13.877147  211246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-729957
	I1018 18:24:13.917220  211246 main.go:141] libmachine: Using SSH client type: native
	I1018 18:24:13.917612  211246 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1018 18:24:13.917624  211246 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 18:24:14.101107  211246 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-729957
	
	I1018 18:24:14.101135  211246 ubuntu.go:182] provisioning hostname "no-preload-729957"
	I1018 18:24:14.101201  211246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-729957
	I1018 18:24:14.138725  211246 main.go:141] libmachine: Using SSH client type: native
	I1018 18:24:14.139031  211246 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1018 18:24:14.139048  211246 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-729957 && echo "no-preload-729957" | sudo tee /etc/hostname
	I1018 18:24:14.177408  211246 cache.go:157] /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1018 18:24:14.177434  211246 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 2.797165358s
	I1018 18:24:14.177446  211246 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1018 18:24:14.177484  211246 cache.go:87] Successfully saved all images to host disk.
	I1018 18:24:14.325447  211246 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-729957
	
	I1018 18:24:14.325548  211246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-729957
	I1018 18:24:14.349347  211246 main.go:141] libmachine: Using SSH client type: native
	I1018 18:24:14.349654  211246 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1018 18:24:14.349683  211246 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-729957' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-729957/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-729957' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 18:24:14.505130  211246 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 18:24:14.505233  211246 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-2509/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-2509/.minikube}
	I1018 18:24:14.505275  211246 ubuntu.go:190] setting up certificates
	I1018 18:24:14.505289  211246 provision.go:84] configureAuth start
	I1018 18:24:14.505349  211246 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-729957
	I1018 18:24:14.522687  211246 provision.go:143] copyHostCerts
	I1018 18:24:14.522758  211246 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem, removing ...
	I1018 18:24:14.522768  211246 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem
	I1018 18:24:14.522841  211246 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem (1078 bytes)
	I1018 18:24:14.522934  211246 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem, removing ...
	I1018 18:24:14.522943  211246 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem
	I1018 18:24:14.522970  211246 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem (1123 bytes)
	I1018 18:24:14.523025  211246 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem, removing ...
	I1018 18:24:14.523033  211246 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem
	I1018 18:24:14.523057  211246 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem (1675 bytes)
	I1018 18:24:14.523107  211246 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem org=jenkins.no-preload-729957 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-729957]
	I1018 18:24:14.965279  211246 provision.go:177] copyRemoteCerts
	I1018 18:24:14.965369  211246 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 18:24:14.965437  211246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-729957
	I1018 18:24:14.988184  211246 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/no-preload-729957/id_rsa Username:docker}
	I1018 18:24:15.101158  211246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 18:24:15.120851  211246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1018 18:24:15.139092  211246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 18:24:15.157498  211246 provision.go:87] duration metric: took 652.185795ms to configureAuth
	I1018 18:24:15.157527  211246 ubuntu.go:206] setting minikube options for container-runtime
	I1018 18:24:15.157722  211246 config.go:182] Loaded profile config "no-preload-729957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 18:24:15.157830  211246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-729957
	I1018 18:24:15.175635  211246 main.go:141] libmachine: Using SSH client type: native
	I1018 18:24:15.175939  211246 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1018 18:24:15.175962  211246 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 18:24:15.534216  211246 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 18:24:15.534285  211246 machine.go:96] duration metric: took 1.657211758s to provisionDockerMachine
	I1018 18:24:15.534301  211246 client.go:171] duration metric: took 4.125543986s to LocalClient.Create
	I1018 18:24:15.534320  211246 start.go:167] duration metric: took 4.125619803s to libmachine.API.Create "no-preload-729957"
	I1018 18:24:15.534327  211246 start.go:293] postStartSetup for "no-preload-729957" (driver="docker")
	I1018 18:24:15.534338  211246 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 18:24:15.534413  211246 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 18:24:15.534460  211246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-729957
	I1018 18:24:15.559857  211246 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/no-preload-729957/id_rsa Username:docker}
	I1018 18:24:15.666256  211246 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 18:24:15.669830  211246 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 18:24:15.669865  211246 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 18:24:15.669877  211246 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/addons for local assets ...
	I1018 18:24:15.669940  211246 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/files for local assets ...
	I1018 18:24:15.670014  211246 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> 43202.pem in /etc/ssl/certs
	I1018 18:24:15.670114  211246 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 18:24:15.678138  211246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /etc/ssl/certs/43202.pem (1708 bytes)
	I1018 18:24:15.696512  211246 start.go:296] duration metric: took 162.170042ms for postStartSetup
	I1018 18:24:15.696892  211246 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-729957
	I1018 18:24:15.715340  211246 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/no-preload-729957/config.json ...
	I1018 18:24:15.715664  211246 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 18:24:15.715720  211246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-729957
	I1018 18:24:15.736145  211246 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/no-preload-729957/id_rsa Username:docker}
	I1018 18:24:15.838316  211246 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 18:24:15.843499  211246 start.go:128] duration metric: took 4.438826812s to createHost
	I1018 18:24:15.843524  211246 start.go:83] releasing machines lock for "no-preload-729957", held for 4.438969525s
	I1018 18:24:15.843593  211246 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-729957
	I1018 18:24:15.860278  211246 ssh_runner.go:195] Run: cat /version.json
	I1018 18:24:15.860336  211246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-729957
	I1018 18:24:15.860384  211246 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 18:24:15.860444  211246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-729957
	I1018 18:24:15.881444  211246 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/no-preload-729957/id_rsa Username:docker}
	I1018 18:24:15.889042  211246 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/no-preload-729957/id_rsa Username:docker}
	I1018 18:24:16.106267  211246 ssh_runner.go:195] Run: systemctl --version
	I1018 18:24:16.112790  211246 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 18:24:16.153317  211246 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 18:24:16.157761  211246 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 18:24:16.157837  211246 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 18:24:16.189272  211246 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1018 18:24:16.189294  211246 start.go:495] detecting cgroup driver to use...
	I1018 18:24:16.189356  211246 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 18:24:16.189430  211246 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 18:24:16.208009  211246 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 18:24:16.221047  211246 docker.go:218] disabling cri-docker service (if available) ...
	I1018 18:24:16.221116  211246 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 18:24:16.238482  211246 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 18:24:16.258556  211246 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 18:24:16.382340  211246 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 18:24:16.524303  211246 docker.go:234] disabling docker service ...
	I1018 18:24:16.524381  211246 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 18:24:16.550220  211246 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 18:24:16.564324  211246 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 18:24:16.679749  211246 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 18:24:16.805043  211246 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 18:24:16.817740  211246 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 18:24:16.834847  211246 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 18:24:16.834931  211246 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:24:16.844558  211246 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 18:24:16.844650  211246 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:24:16.854225  211246 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:24:16.863365  211246 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:24:16.871962  211246 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 18:24:16.880144  211246 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:24:16.889133  211246 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:24:16.903120  211246 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:24:16.912187  211246 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 18:24:16.920611  211246 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 18:24:16.928352  211246 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 18:24:17.045018  211246 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 18:24:17.185414  211246 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 18:24:17.185526  211246 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 18:24:17.189598  211246 start.go:563] Will wait 60s for crictl version
	I1018 18:24:17.189701  211246 ssh_runner.go:195] Run: which crictl
	I1018 18:24:17.193611  211246 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 18:24:17.221430  211246 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 18:24:17.221576  211246 ssh_runner.go:195] Run: crio --version
	I1018 18:24:17.254227  211246 ssh_runner.go:195] Run: crio --version
	I1018 18:24:17.308172  211246 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1018 18:24:14.977029  207600 pod_ready.go:104] pod "coredns-66bc5c9577-grf2z" is not "Ready", error: <nil>
	W1018 18:24:17.475961  207600 pod_ready.go:104] pod "coredns-66bc5c9577-grf2z" is not "Ready", error: <nil>
	W1018 18:24:19.480061  207600 pod_ready.go:104] pod "coredns-66bc5c9577-grf2z" is not "Ready", error: <nil>
	I1018 18:24:17.310905  211246 cli_runner.go:164] Run: docker network inspect no-preload-729957 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 18:24:17.326411  211246 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1018 18:24:17.330295  211246 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 18:24:17.339707  211246 kubeadm.go:883] updating cluster {Name:no-preload-729957 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-729957 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 18:24:17.339818  211246 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 18:24:17.339861  211246 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 18:24:17.365535  211246 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1018 18:24:17.365560  211246 cache_images.go:89] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1018 18:24:17.365605  211246 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 18:24:17.365824  211246 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1018 18:24:17.365932  211246 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1018 18:24:17.366021  211246 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1018 18:24:17.366124  211246 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1018 18:24:17.366219  211246 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1018 18:24:17.366314  211246 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1018 18:24:17.366441  211246 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1018 18:24:17.367466  211246 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1018 18:24:17.367707  211246 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1018 18:24:17.367862  211246 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1018 18:24:17.367993  211246 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1018 18:24:17.368155  211246 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1018 18:24:17.368302  211246 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1018 18:24:17.368449  211246 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 18:24:17.368768  211246 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1018 18:24:17.588075  211246 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.1
	I1018 18:24:17.588588  211246 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1018 18:24:17.604431  211246 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1018 18:24:17.604533  211246 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.1
	I1018 18:24:17.609488  211246 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	I1018 18:24:17.617110  211246 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.1
	I1018 18:24:17.617287  211246 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.1
	I1018 18:24:17.671559  211246 cache_images.go:117] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9" in container runtime
	I1018 18:24:17.671609  211246 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1018 18:24:17.671665  211246 ssh_runner.go:195] Run: which crictl
	I1018 18:24:17.671746  211246 cache_images.go:117] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1018 18:24:17.671765  211246 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1018 18:24:17.671806  211246 ssh_runner.go:195] Run: which crictl
	I1018 18:24:17.755580  211246 cache_images.go:117] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0" in container runtime
	I1018 18:24:17.755623  211246 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1018 18:24:17.755680  211246 ssh_runner.go:195] Run: which crictl
	I1018 18:24:17.755793  211246 cache_images.go:117] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e" in container runtime
	I1018 18:24:17.755923  211246 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1018 18:24:17.755955  211246 ssh_runner.go:195] Run: which crictl
	I1018 18:24:17.755870  211246 cache_images.go:117] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196" in container runtime
	I1018 18:24:17.756020  211246 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1018 18:24:17.756052  211246 ssh_runner.go:195] Run: which crictl
	I1018 18:24:17.755832  211246 cache_images.go:117] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc" in container runtime
	I1018 18:24:17.756094  211246 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1018 18:24:17.756130  211246 ssh_runner.go:195] Run: which crictl
	I1018 18:24:17.761752  211246 cache_images.go:117] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a" in container runtime
	I1018 18:24:17.761793  211246 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1018 18:24:17.761870  211246 ssh_runner.go:195] Run: which crictl
	I1018 18:24:17.761982  211246 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1018 18:24:17.762049  211246 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1018 18:24:17.768037  211246 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1018 18:24:17.768121  211246 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1018 18:24:17.768197  211246 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1018 18:24:17.768237  211246 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1018 18:24:17.868189  211246 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1018 18:24:17.868215  211246 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1018 18:24:17.868356  211246 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1018 18:24:17.868387  211246 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1018 18:24:17.871626  211246 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1018 18:24:17.871746  211246 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1018 18:24:17.871820  211246 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1018 18:24:17.963982  211246 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1018 18:24:17.964084  211246 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1018 18:24:17.964213  211246 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1018 18:24:17.998750  211246 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1018 18:24:17.998851  211246 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1018 18:24:17.998931  211246 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1018 18:24:17.998994  211246 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1018 18:24:18.066399  211246 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1018 18:24:18.066510  211246 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1018 18:24:18.066597  211246 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1018 18:24:18.066670  211246 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1018 18:24:18.066719  211246 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1018 18:24:18.126106  211246 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1018 18:24:18.126160  211246 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1018 18:24:18.126329  211246 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1018 18:24:18.126426  211246 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1018 18:24:18.127060  211246 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1018 18:24:18.127155  211246 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1018 18:24:18.138662  211246 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1018 18:24:18.138996  211246 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1018 18:24:18.138770  211246 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1018 18:24:18.139062  211246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (22790144 bytes)
	I1018 18:24:18.138805  211246 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1018 18:24:18.139135  211246 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1018 18:24:18.138826  211246 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1018 18:24:18.139164  211246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (15790592 bytes)
	I1018 18:24:18.138850  211246 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1018 18:24:18.139188  211246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1018 18:24:18.138867  211246 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1018 18:24:18.139212  211246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (20402176 bytes)
	I1018 18:24:18.138884  211246 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1018 18:24:18.139233  211246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (98216960 bytes)
	I1018 18:24:18.206044  211246 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1018 18:24:18.206089  211246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (20730880 bytes)
	I1018 18:24:18.206158  211246 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1018 18:24:18.206195  211246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (24581632 bytes)
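
Every transfer above follows the same pattern: an existence check (stat -c "%s %y" on the target path) is run on the node, and only when it exits non-zero is the cached file copied over. A rough local sketch of that pattern is below; it uses a plain file copy instead of minikube's ssh_runner, and copyIfMissing is a made-up helper name.

// copy_if_missing.go - sketch of the "stat, then copy" pattern in the log above.
package main

import (
	"fmt"
	"io"
	"os"
	"os/exec"
)

// copyIfMissing transfers src to dst only when the stat existence check fails.
func copyIfMissing(src, dst string) error {
	if err := exec.Command("stat", "-c", "%s %y", dst).Run(); err == nil {
		return nil // destination already present, nothing to do
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	// placeholder paths matching the cache layout seen in the log
	err := copyIfMissing("cache/images/arm64/registry.k8s.io/pause_3.10.1",
		"/var/lib/minikube/images/pause_3.10.1")
	fmt.Println("copy:", err)
}
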
	I1018 18:24:18.254107  211246 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1018 18:24:18.254235  211246 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1018 18:24:18.720444  211246 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1018 18:24:18.720530  211246 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1018 18:24:18.720609  211246 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	W1018 18:24:18.848540  211246 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1018 18:24:18.848843  211246 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 18:24:20.619745  211246 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.89907958s)
	I1018 18:24:20.619770  211246 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1018 18:24:20.619788  211246 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1018 18:24:20.619787  211246 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.770896306s)
	I1018 18:24:20.619824  211246 cache_images.go:117] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1018 18:24:20.619840  211246 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1018 18:24:20.619850  211246 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 18:24:20.619889  211246 ssh_runner.go:195] Run: which crictl
	I1018 18:24:20.976874  207600 pod_ready.go:94] pod "coredns-66bc5c9577-grf2z" is "Ready"
	I1018 18:24:20.976901  207600 pod_ready.go:86] duration metric: took 40.505774144s for pod "coredns-66bc5c9577-grf2z" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:24:20.987304  207600 pod_ready.go:83] waiting for pod "etcd-embed-certs-213943" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:24:20.993519  207600 pod_ready.go:94] pod "etcd-embed-certs-213943" is "Ready"
	I1018 18:24:20.993599  207600 pod_ready.go:86] duration metric: took 6.256516ms for pod "etcd-embed-certs-213943" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:24:20.995869  207600 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-213943" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:24:21.000461  207600 pod_ready.go:94] pod "kube-apiserver-embed-certs-213943" is "Ready"
	I1018 18:24:21.000484  207600 pod_ready.go:86] duration metric: took 4.593511ms for pod "kube-apiserver-embed-certs-213943" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:24:21.005649  207600 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-213943" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:24:21.175657  207600 pod_ready.go:94] pod "kube-controller-manager-embed-certs-213943" is "Ready"
	I1018 18:24:21.175686  207600 pod_ready.go:86] duration metric: took 169.959197ms for pod "kube-controller-manager-embed-certs-213943" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:24:21.376185  207600 pod_ready.go:83] waiting for pod "kube-proxy-gcf8n" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:24:21.775427  207600 pod_ready.go:94] pod "kube-proxy-gcf8n" is "Ready"
	I1018 18:24:21.775456  207600 pod_ready.go:86] duration metric: took 399.243127ms for pod "kube-proxy-gcf8n" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:24:21.975729  207600 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-213943" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:24:22.375901  207600 pod_ready.go:94] pod "kube-scheduler-embed-certs-213943" is "Ready"
	I1018 18:24:22.375933  207600 pod_ready.go:86] duration metric: took 400.174461ms for pod "kube-scheduler-embed-certs-213943" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:24:22.375957  207600 pod_ready.go:40] duration metric: took 41.946495197s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 18:24:22.447152  207600 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 18:24:22.451036  207600 out.go:179] * Done! kubectl is now configured to use "embed-certs-213943" cluster and "default" namespace by default
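
The pod_ready.go lines above poll the kube-system pods selected by component labels until each one reports the Ready condition, which is what allows the "Done!" message for embed-certs-213943. A rough equivalent using client-go is sketched below; the kubeconfig source and label selector are placeholders, and this is not the minikube helper itself.

// pods_ready.go - sketch of the readiness poll, using client-go.
package main

import (
	"context"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// allReady reports whether every pod matching the selector has the Ready condition.
func allReady(cs *kubernetes.Clientset, namespace, selector string) (bool, error) {
	pods, err := cs.CoreV1().Pods(namespace).List(context.TODO(),
		metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		return false, err
	}
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		if !ready {
			return false, nil
		}
	}
	return len(pods.Items) > 0, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG")) // placeholder kubeconfig
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	ok, err := allReady(cs, "kube-system", "k8s-app=kube-dns")
	fmt.Println("coredns ready:", ok, err)
}
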
	I1018 18:24:22.312563  211246 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (1.69270153s)
	I1018 18:24:22.312593  211246 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1018 18:24:22.312611  211246 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1018 18:24:22.312666  211246 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1018 18:24:22.312734  211246 ssh_runner.go:235] Completed: which crictl: (1.692837671s)
	I1018 18:24:22.312763  211246 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 18:24:23.838821  211246 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.526036784s)
	I1018 18:24:23.838901  211246 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.526213656s)
	I1018 18:24:23.838913  211246 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 18:24:23.838920  211246 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1018 18:24:23.838939  211246 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1018 18:24:23.838977  211246 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1018 18:24:25.095042  211246 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.256044s)
	I1018 18:24:25.095070  211246 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1018 18:24:25.095089  211246 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1018 18:24:25.095138  211246 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1018 18:24:25.095206  211246 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.256281385s)
	I1018 18:24:25.095239  211246 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 18:24:26.460880  211246 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.365621596s)
	I1018 18:24:26.460924  211246 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1018 18:24:26.461012  211246 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.365853469s)
	I1018 18:24:26.461029  211246 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1018 18:24:26.461045  211246 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1018 18:24:26.461067  211246 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1018 18:24:26.461082  211246 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	I1018 18:24:26.466210  211246 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1018 18:24:26.466242  211246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1018 18:24:30.292538  211246 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (3.831434085s)
	I1018 18:24:30.292561  211246 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1018 18:24:30.292579  211246 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1018 18:24:30.292641  211246 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1018 18:24:30.901199  211246 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1018 18:24:30.901230  211246 cache_images.go:124] Successfully loaded all cached images
	I1018 18:24:30.901237  211246 cache_images.go:93] duration metric: took 13.535661895s to LoadCachedImages
	I1018 18:24:30.901248  211246 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1018 18:24:30.901419  211246 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-729957 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-729957 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 18:24:30.901542  211246 ssh_runner.go:195] Run: crio config
	I1018 18:24:30.974078  211246 cni.go:84] Creating CNI manager for ""
	I1018 18:24:30.974101  211246 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 18:24:30.974121  211246 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 18:24:30.974143  211246 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-729957 NodeName:no-preload-729957 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 18:24:30.974269  211246 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-729957"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
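
The kubeadm/kubelet/kube-proxy config printed above is produced by filling a template with the options shown at kubeadm.go:190. A toy version of that rendering step with text/template is below; the template fragment is a trimmed stand-in for illustration, not minikube's actual template.

// render_config.go - sketch: render a kubeadm config fragment from option values.
package main

import (
	"os"
	"text/template"
)

// opts carries only the fields used by the fragment below; the real options
// struct in the log has many more.
type opts struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	PodSubnet        string
	ServiceSubnet    string
}

const fragment = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(fragment))
	// values taken from the log above
	_ = t.Execute(os.Stdout, opts{
		AdvertiseAddress: "192.168.76.2",
		BindPort:         8443,
		NodeName:         "no-preload-729957",
		PodSubnet:        "10.244.0.0/16",
		ServiceSubnet:    "10.96.0.0/12",
	})
}
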
	I1018 18:24:30.974347  211246 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 18:24:30.983149  211246 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1018 18:24:30.983253  211246 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1018 18:24:30.990902  211246 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
	I1018 18:24:30.990992  211246 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1018 18:24:30.991823  211246 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21409-2509/.minikube/cache/linux/arm64/v1.34.1/kubeadm
	I1018 18:24:30.992301  211246 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/21409-2509/.minikube/cache/linux/arm64/v1.34.1/kubelet
	I1018 18:24:30.995178  211246 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1018 18:24:30.995215  211246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/cache/linux/arm64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (58130616 bytes)
	I1018 18:24:31.923855  211246 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 18:24:31.953404  211246 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1018 18:24:31.957932  211246 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1018 18:24:31.958017  211246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/cache/linux/arm64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (56426788 bytes)
	I1018 18:24:32.008734  211246 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1018 18:24:32.033560  211246 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1018 18:24:32.033605  211246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/cache/linux/arm64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (71434424 bytes)
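
The kubectl, kubelet and kubeadm binaries above are fetched from dl.k8s.io with "checksum=file:" URLs, i.e. each download is verified against the published .sha256 file before being copied into /var/lib/minikube/binaries. Below is a small standalone sketch of that verification step, using plain net/http and crypto/sha256 rather than minikube's download package.

// verify_download.go - sketch: fetch a release binary and check its sha256.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetch downloads a URL fully into memory; fine for a sketch, wasteful for 70MB binaries.
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm"
	bin, err := fetch(base)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	sum, err := fetch(base + ".sha256")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	want := strings.TrimSpace(string(sum))
	if i := strings.IndexByte(want, ' '); i > 0 {
		want = want[:i] // some .sha256 files also append the file name
	}
	digest := sha256.Sum256(bin)
	if hex.EncodeToString(digest[:]) != want {
		fmt.Fprintln(os.Stderr, "checksum mismatch, refusing to install")
		return
	}
	fmt.Println("checksum OK,", len(bin), "bytes")
}
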
	I1018 18:24:32.651210  211246 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 18:24:32.661264  211246 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1018 18:24:32.678917  211246 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 18:24:32.696365  211246 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1018 18:24:32.714063  211246 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1018 18:24:32.718652  211246 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 18:24:32.730262  211246 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 18:24:32.863622  211246 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 18:24:32.885523  211246 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/no-preload-729957 for IP: 192.168.76.2
	I1018 18:24:32.885543  211246 certs.go:195] generating shared ca certs ...
	I1018 18:24:32.885560  211246 certs.go:227] acquiring lock for ca certs: {Name:mk544ed642fa2832e9f6dd22fa45f3270b7c1ad4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:24:32.885741  211246 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key
	I1018 18:24:32.885815  211246 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key
	I1018 18:24:32.885829  211246 certs.go:257] generating profile certs ...
	I1018 18:24:32.885901  211246 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/no-preload-729957/client.key
	I1018 18:24:32.885921  211246 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/no-preload-729957/client.crt with IP's: []
	I1018 18:24:33.405996  211246 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/no-preload-729957/client.crt ...
	I1018 18:24:33.406031  211246 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/no-preload-729957/client.crt: {Name:mkb88d1fc4eda926df0094c266b80eb07c0c6248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:24:33.406216  211246 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/no-preload-729957/client.key ...
	I1018 18:24:33.406231  211246 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/no-preload-729957/client.key: {Name:mk17de1715fbe442811f136f345e4d2d5d6152ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:24:33.411612  211246 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/no-preload-729957/apiserver.key.1af67460
	I1018 18:24:33.411644  211246 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/no-preload-729957/apiserver.crt.1af67460 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1018 18:24:34.012012  211246 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/no-preload-729957/apiserver.crt.1af67460 ...
	I1018 18:24:34.013017  211246 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/no-preload-729957/apiserver.crt.1af67460: {Name:mk01e269c988943fbd6908ef6682c4890911893c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:24:34.013280  211246 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/no-preload-729957/apiserver.key.1af67460 ...
	I1018 18:24:34.013295  211246 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/no-preload-729957/apiserver.key.1af67460: {Name:mk7125dd8d24a5b02578b34f7f552895728fedff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:24:34.013391  211246 certs.go:382] copying /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/no-preload-729957/apiserver.crt.1af67460 -> /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/no-preload-729957/apiserver.crt
	I1018 18:24:34.013480  211246 certs.go:386] copying /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/no-preload-729957/apiserver.key.1af67460 -> /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/no-preload-729957/apiserver.key
	I1018 18:24:34.013536  211246 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/no-preload-729957/proxy-client.key
	I1018 18:24:34.013550  211246 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/no-preload-729957/proxy-client.crt with IP's: []
	I1018 18:24:34.598586  211246 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/no-preload-729957/proxy-client.crt ...
	I1018 18:24:34.598705  211246 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/no-preload-729957/proxy-client.crt: {Name:mk701079d8162ef4118880b8525ea3a22971b851 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:24:34.602663  211246 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/no-preload-729957/proxy-client.key ...
	I1018 18:24:34.602683  211246 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/no-preload-729957/proxy-client.key: {Name:mkd0883998c50dca58e9d17878c2db1d77087a43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
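
The certs.go steps above generate the profile certificates signed by the shared minikubeCA, with the apiserver certificate carrying the IP SANs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.76.2. The sketch below issues a comparable cert with crypto/x509; the CA here is a throwaway generated in-process, unlike minikube's persistent ca.key, so it is an illustration only, not minikube's crypto.go.

// sign_cert.go - sketch: issue a serving cert with IP SANs, signed by a CA key pair.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// throwaway CA (key generation errors ignored for brevity in this sketch)
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// apiserver-style serving cert with the IP SANs seen in the log
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "control-plane.minikube.internal"},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
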
	I1018 18:24:34.602875  211246 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem (1338 bytes)
	W1018 18:24:34.602911  211246 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320_empty.pem, impossibly tiny 0 bytes
	I1018 18:24:34.602920  211246 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 18:24:34.602944  211246 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem (1078 bytes)
	I1018 18:24:34.602965  211246 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem (1123 bytes)
	I1018 18:24:34.602990  211246 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem (1675 bytes)
	I1018 18:24:34.603044  211246 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem (1708 bytes)
	I1018 18:24:34.603630  211246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 18:24:34.627876  211246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 18:24:34.655530  211246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 18:24:34.684716  211246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 18:24:34.709101  211246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/no-preload-729957/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1018 18:24:34.729135  211246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/no-preload-729957/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 18:24:34.748843  211246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/no-preload-729957/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 18:24:34.771623  211246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/no-preload-729957/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 18:24:34.791618  211246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 18:24:34.810983  211246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem --> /usr/share/ca-certificates/4320.pem (1338 bytes)
	I1018 18:24:34.842137  211246 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /usr/share/ca-certificates/43202.pem (1708 bytes)
	I1018 18:24:34.861364  211246 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 18:24:34.875784  211246 ssh_runner.go:195] Run: openssl version
	I1018 18:24:34.884739  211246 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 18:24:34.894638  211246 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 18:24:34.903014  211246 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I1018 18:24:34.903082  211246 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 18:24:34.969989  211246 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 18:24:34.978550  211246 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4320.pem && ln -fs /usr/share/ca-certificates/4320.pem /etc/ssl/certs/4320.pem"
	I1018 18:24:34.986875  211246 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4320.pem
	I1018 18:24:34.991142  211246 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 17:19 /usr/share/ca-certificates/4320.pem
	I1018 18:24:34.991205  211246 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4320.pem
	I1018 18:24:35.033632  211246 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4320.pem /etc/ssl/certs/51391683.0"
	I1018 18:24:35.042547  211246 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43202.pem && ln -fs /usr/share/ca-certificates/43202.pem /etc/ssl/certs/43202.pem"
	I1018 18:24:35.051062  211246 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43202.pem
	I1018 18:24:35.055370  211246 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 17:19 /usr/share/ca-certificates/43202.pem
	I1018 18:24:35.055438  211246 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43202.pem
	I1018 18:24:35.098321  211246 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43202.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 18:24:35.107436  211246 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 18:24:35.111272  211246 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 18:24:35.111327  211246 kubeadm.go:400] StartCluster: {Name:no-preload-729957 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-729957 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 18:24:35.111399  211246 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 18:24:35.111478  211246 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 18:24:35.138568  211246 cri.go:89] found id: ""
	I1018 18:24:35.138664  211246 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 18:24:35.148036  211246 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 18:24:35.156345  211246 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 18:24:35.156422  211246 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 18:24:35.169226  211246 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 18:24:35.169249  211246 kubeadm.go:157] found existing configuration files:
	
	I1018 18:24:35.169322  211246 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 18:24:35.178752  211246 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 18:24:35.178831  211246 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 18:24:35.186961  211246 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 18:24:35.195614  211246 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 18:24:35.195692  211246 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 18:24:35.203346  211246 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 18:24:35.211938  211246 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 18:24:35.212028  211246 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 18:24:35.219789  211246 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 18:24:35.240853  211246 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 18:24:35.240972  211246 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1018 18:24:35.251364  211246 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 18:24:35.313066  211246 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 18:24:35.313439  211246 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 18:24:35.339221  211246 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 18:24:35.339303  211246 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1018 18:24:35.339347  211246 kubeadm.go:318] OS: Linux
	I1018 18:24:35.339419  211246 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 18:24:35.339488  211246 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1018 18:24:35.339559  211246 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 18:24:35.339624  211246 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 18:24:35.339689  211246 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 18:24:35.339783  211246 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 18:24:35.339848  211246 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 18:24:35.339918  211246 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 18:24:35.339981  211246 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1018 18:24:35.406398  211246 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 18:24:35.406538  211246 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 18:24:35.406652  211246 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 18:24:35.422809  211246 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 18:24:35.429788  211246 out.go:252]   - Generating certificates and keys ...
	I1018 18:24:35.429907  211246 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 18:24:35.429975  211246 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	
	
	==> CRI-O <==
	Oct 18 18:24:10 embed-certs-213943 crio[649]: time="2025-10-18T18:24:10.016688175Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=cc7c1f34-cd0c-45d1-b43b-3d1192e89c38 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 18:24:10 embed-certs-213943 crio[649]: time="2025-10-18T18:24:10.018165938Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=d89ba3d3-c0bd-45a4-a5eb-78dccdf037c5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 18:24:10 embed-certs-213943 crio[649]: time="2025-10-18T18:24:10.018481872Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 18:24:10 embed-certs-213943 crio[649]: time="2025-10-18T18:24:10.027923806Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 18:24:10 embed-certs-213943 crio[649]: time="2025-10-18T18:24:10.028101483Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/a20754eaa05d40c9cd62c858d6b1fb7a930d979ed875730d0583dcb90e0f24d0/merged/etc/passwd: no such file or directory"
	Oct 18 18:24:10 embed-certs-213943 crio[649]: time="2025-10-18T18:24:10.028125138Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/a20754eaa05d40c9cd62c858d6b1fb7a930d979ed875730d0583dcb90e0f24d0/merged/etc/group: no such file or directory"
	Oct 18 18:24:10 embed-certs-213943 crio[649]: time="2025-10-18T18:24:10.028371Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 18:24:10 embed-certs-213943 crio[649]: time="2025-10-18T18:24:10.060176413Z" level=info msg="Created container 6ea0fc669c5a5aed268bc4f1b1959ec658c78291c197f49575e209481a5d2d96: kube-system/storage-provisioner/storage-provisioner" id=d89ba3d3-c0bd-45a4-a5eb-78dccdf037c5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 18:24:10 embed-certs-213943 crio[649]: time="2025-10-18T18:24:10.061114026Z" level=info msg="Starting container: 6ea0fc669c5a5aed268bc4f1b1959ec658c78291c197f49575e209481a5d2d96" id=224c2175-184e-4a30-b8f5-7293cbf89c49 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 18:24:10 embed-certs-213943 crio[649]: time="2025-10-18T18:24:10.063519414Z" level=info msg="Started container" PID=1639 containerID=6ea0fc669c5a5aed268bc4f1b1959ec658c78291c197f49575e209481a5d2d96 description=kube-system/storage-provisioner/storage-provisioner id=224c2175-184e-4a30-b8f5-7293cbf89c49 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c72a61929cd732ab0dea5ab47285a4323ef4ca0517453a4abbf4abb6b9ee1ec4
	Oct 18 18:24:19 embed-certs-213943 crio[649]: time="2025-10-18T18:24:19.909983581Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 18:24:19 embed-certs-213943 crio[649]: time="2025-10-18T18:24:19.914454407Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 18:24:19 embed-certs-213943 crio[649]: time="2025-10-18T18:24:19.914617724Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 18:24:19 embed-certs-213943 crio[649]: time="2025-10-18T18:24:19.914707547Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 18:24:19 embed-certs-213943 crio[649]: time="2025-10-18T18:24:19.918618629Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 18:24:19 embed-certs-213943 crio[649]: time="2025-10-18T18:24:19.918766077Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 18:24:19 embed-certs-213943 crio[649]: time="2025-10-18T18:24:19.918842353Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 18:24:19 embed-certs-213943 crio[649]: time="2025-10-18T18:24:19.922135548Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 18:24:19 embed-certs-213943 crio[649]: time="2025-10-18T18:24:19.922269433Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 18:24:19 embed-certs-213943 crio[649]: time="2025-10-18T18:24:19.922337528Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 18:24:19 embed-certs-213943 crio[649]: time="2025-10-18T18:24:19.927718643Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 18:24:19 embed-certs-213943 crio[649]: time="2025-10-18T18:24:19.927896754Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 18:24:19 embed-certs-213943 crio[649]: time="2025-10-18T18:24:19.927974105Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 18:24:19 embed-certs-213943 crio[649]: time="2025-10-18T18:24:19.9310837Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 18:24:19 embed-certs-213943 crio[649]: time="2025-10-18T18:24:19.931225732Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	6ea0fc669c5a5       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           30 seconds ago       Running             storage-provisioner         2                   c72a61929cd73       storage-provisioner                          kube-system
	b1cbdd377acf4       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           33 seconds ago       Exited              dashboard-metrics-scraper   2                   ec7f93e23b19b       dashboard-metrics-scraper-6ffb444bf9-vn8f9   kubernetes-dashboard
	e9de5e570569b       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   48 seconds ago       Running             kubernetes-dashboard        0                   f7422b56b39a8       kubernetes-dashboard-855c9754f9-nmbd5        kubernetes-dashboard
	b3913cfee7fb2       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           About a minute ago   Running             busybox                     1                   bedb791b15a59       busybox                                      default
	c7e1739778739       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           About a minute ago   Exited              storage-provisioner         1                   c72a61929cd73       storage-provisioner                          kube-system
	630e3f457293e       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           About a minute ago   Running             kindnet-cni                 1                   5db8954cad24f       kindnet-44fc8                                kube-system
	16aec3adc07ff       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           About a minute ago   Running             coredns                     1                   6d0a0d5a3b94e       coredns-66bc5c9577-grf2z                     kube-system
	39d6593c6d8d5       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           About a minute ago   Running             kube-proxy                  1                   5a7d1d72602dc       kube-proxy-gcf8n                             kube-system
	97b7723e6cc93       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   1537da0dc601d       kube-apiserver-embed-certs-213943            kube-system
	9ae5471fee776       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   c24c7d62ddf7d       kube-controller-manager-embed-certs-213943   kube-system
	579b2e90159d3       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   e0bf1ab11eafe       kube-scheduler-embed-certs-213943            kube-system
	320b2b6a0f723       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   6b6cedbf06b06       etcd-embed-certs-213943                      kube-system
	
	
	==> coredns [16aec3adc07fffcd5545d9bd12ca76fc45c9f92f49291dbfa7eb00de6d54c0ac] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38027 - 16333 "HINFO IN 6750387169992443311.6936307795065856942. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.056612977s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               embed-certs-213943
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-213943
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=embed-certs-213943
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T18_22_10_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 18:22:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-213943
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 18:24:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 18:24:08 +0000   Sat, 18 Oct 2025 18:22:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 18:24:08 +0000   Sat, 18 Oct 2025 18:22:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 18:24:08 +0000   Sat, 18 Oct 2025 18:22:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 18:24:08 +0000   Sat, 18 Oct 2025 18:22:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-213943
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                af083a40-edc0-4386-b2b1-7b1c8d51d4fc
	  Boot ID:                    8a1d6305-2994-4aa4-a4cc-6b62966b9918
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 coredns-66bc5c9577-grf2z                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m27s
	  kube-system                 etcd-embed-certs-213943                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m33s
	  kube-system                 kindnet-44fc8                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m27s
	  kube-system                 kube-apiserver-embed-certs-213943             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 kube-controller-manager-embed-certs-213943    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 kube-proxy-gcf8n                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 kube-scheduler-embed-certs-213943             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m33s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m26s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-vn8f9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-nmbd5         0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m25s                  kube-proxy       
	  Normal   Starting                 61s                    kube-proxy       
	  Normal   Starting                 2m40s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m40s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m39s (x8 over 2m39s)  kubelet          Node embed-certs-213943 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m39s (x8 over 2m39s)  kubelet          Node embed-certs-213943 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m39s (x8 over 2m39s)  kubelet          Node embed-certs-213943 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m32s                  kubelet          Node embed-certs-213943 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m32s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m32s                  kubelet          Node embed-certs-213943 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m32s                  kubelet          Node embed-certs-213943 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m32s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m28s                  node-controller  Node embed-certs-213943 event: Registered Node embed-certs-213943 in Controller
	  Normal   NodeReady                106s                   kubelet          Node embed-certs-213943 status is now: NodeReady
	  Normal   Starting                 70s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 70s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  70s (x8 over 70s)      kubelet          Node embed-certs-213943 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    70s (x8 over 70s)      kubelet          Node embed-certs-213943 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     70s (x8 over 70s)      kubelet          Node embed-certs-213943 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           59s                    node-controller  Node embed-certs-213943 event: Registered Node embed-certs-213943 in Controller
	
	
	==> dmesg <==
	[Oct18 18:04] overlayfs: idmapped layers are currently not supported
	[ +24.403909] overlayfs: idmapped layers are currently not supported
	[  +6.162774] overlayfs: idmapped layers are currently not supported
	[Oct18 18:05] overlayfs: idmapped layers are currently not supported
	[ +25.128760] overlayfs: idmapped layers are currently not supported
	[Oct18 18:06] overlayfs: idmapped layers are currently not supported
	[Oct18 18:07] overlayfs: idmapped layers are currently not supported
	[Oct18 18:08] overlayfs: idmapped layers are currently not supported
	[Oct18 18:09] overlayfs: idmapped layers are currently not supported
	[Oct18 18:11] overlayfs: idmapped layers are currently not supported
	[Oct18 18:13] overlayfs: idmapped layers are currently not supported
	[ +30.969240] overlayfs: idmapped layers are currently not supported
	[Oct18 18:15] overlayfs: idmapped layers are currently not supported
	[Oct18 18:16] overlayfs: idmapped layers are currently not supported
	[Oct18 18:17] overlayfs: idmapped layers are currently not supported
	[ +23.167826] overlayfs: idmapped layers are currently not supported
	[Oct18 18:18] overlayfs: idmapped layers are currently not supported
	[ +38.509809] overlayfs: idmapped layers are currently not supported
	[Oct18 18:19] overlayfs: idmapped layers are currently not supported
	[Oct18 18:21] overlayfs: idmapped layers are currently not supported
	[Oct18 18:22] overlayfs: idmapped layers are currently not supported
	[Oct18 18:23] overlayfs: idmapped layers are currently not supported
	[ +30.822562] overlayfs: idmapped layers are currently not supported
	[Oct18 18:24] bpfilter: read fail -512
	[ +10.607871] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [320b2b6a0f723790bef132bc7d46d0c55becfa751e8cd836c15cde5c23b0446d] <==
	{"level":"warn","ts":"2025-10-18T18:23:35.723610Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:35.754965Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:35.797634Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:35.816660Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:35.849241Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:35.879741Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:35.898191Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:35.927602Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:35.992485Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:36.038857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:36.081116Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:36.101663Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:36.140300Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:36.177799Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:36.213185Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:36.265157Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:36.321237Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:36.345809Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:36.386713Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:36.429901Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:36.542096Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:36.561374Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:36.636496Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:36.656243Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:23:36.736140Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47656","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 18:24:41 up  2:07,  0 user,  load average: 2.89, 2.97, 2.76
	Linux embed-certs-213943 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [630e3f457293e1639be23c9cecc27705318c350d2ca0ae9fa75f375bfdf573c8] <==
	I1018 18:23:39.615867       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 18:23:39.616265       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1018 18:23:39.616437       1 main.go:148] setting mtu 1500 for CNI 
	I1018 18:23:39.616482       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 18:23:39.616520       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T18:23:39Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 18:23:39.909975       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 18:23:39.910001       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 18:23:39.910010       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 18:23:39.910130       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1018 18:24:09.907427       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1018 18:24:09.908584       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1018 18:24:09.910930       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1018 18:24:09.911044       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1018 18:24:11.211145       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 18:24:11.211269       1 metrics.go:72] Registering metrics
	I1018 18:24:11.211527       1 controller.go:711] "Syncing nftables rules"
	I1018 18:24:19.909020       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 18:24:19.909070       1 main.go:301] handling current node
	I1018 18:24:29.906765       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 18:24:29.906809       1 main.go:301] handling current node
	I1018 18:24:39.914688       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 18:24:39.914718       1 main.go:301] handling current node
	
	
	==> kube-apiserver [97b7723e6cc93259a63a7dc305c6dd7a4974876e6dc283507e6d8ce5af737bcb] <==
	I1018 18:23:37.978770       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1018 18:23:37.987292       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1018 18:23:37.987658       1 cache.go:39] Caches are synced for autoregister controller
	I1018 18:23:37.993348       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1018 18:23:37.999445       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1018 18:23:37.999554       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1018 18:23:38.005359       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1018 18:23:38.016749       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1018 18:23:38.017454       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1018 18:23:38.018151       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 18:23:38.019344       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1018 18:23:38.019466       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1018 18:23:38.019622       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	E1018 18:23:38.049382       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1018 18:23:38.702921       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 18:23:38.841385       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 18:23:39.013187       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 18:23:39.375880       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 18:23:39.481270       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 18:23:39.525940       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 18:23:39.826147       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.167.49"}
	I1018 18:23:39.848196       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.206.42"}
	I1018 18:23:42.239878       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 18:23:42.539683       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 18:23:42.588843       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [9ae5471fee776db561d720631098bdc12432bd23b92d88eb2d07deb57fed51ac] <==
	I1018 18:23:42.161719       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 18:23:42.163443       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1018 18:23:42.174923       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 18:23:42.175018       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1018 18:23:42.175046       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1018 18:23:42.175459       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 18:23:42.175799       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1018 18:23:42.179495       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1018 18:23:42.182556       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1018 18:23:42.182669       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1018 18:23:42.182683       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1018 18:23:42.182702       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1018 18:23:42.182717       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 18:23:42.182728       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 18:23:42.187811       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1018 18:23:42.187956       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1018 18:23:42.188020       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 18:23:42.188070       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 18:23:42.191932       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 18:23:42.192052       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 18:23:42.192087       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 18:23:42.194481       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1018 18:23:42.194669       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 18:23:42.195258       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-213943"
	I1018 18:23:42.195377       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	
	
	==> kube-proxy [39d6593c6d8d54c71c1c11426effcafa05b750b8b4e8c8f61eccd2fde32ca8ec] <==
	I1018 18:23:39.660800       1 server_linux.go:53] "Using iptables proxy"
	I1018 18:23:39.840484       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 18:23:39.945048       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 18:23:39.946374       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1018 18:23:39.946533       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 18:23:39.986656       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 18:23:39.986728       1 server_linux.go:132] "Using iptables Proxier"
	I1018 18:23:39.991323       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 18:23:39.991733       1 server.go:527] "Version info" version="v1.34.1"
	I1018 18:23:39.991758       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 18:23:39.993535       1 config.go:200] "Starting service config controller"
	I1018 18:23:39.993562       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 18:23:39.993579       1 config.go:106] "Starting endpoint slice config controller"
	I1018 18:23:39.993583       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 18:23:39.993594       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 18:23:39.993598       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 18:23:40.020121       1 config.go:309] "Starting node config controller"
	I1018 18:23:40.020146       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 18:23:40.020154       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 18:23:40.093875       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 18:23:40.093934       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 18:23:40.094002       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [579b2e90159d3f472f72b4d74cead642311dbb50b6aa56372bed6e44fa5f0026] <==
	I1018 18:23:36.282302       1 serving.go:386] Generated self-signed cert in-memory
	I1018 18:23:39.155189       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 18:23:39.155223       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 18:23:39.200691       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 18:23:39.200790       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1018 18:23:39.200813       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1018 18:23:39.200855       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 18:23:39.226337       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 18:23:39.226377       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 18:23:39.226398       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 18:23:39.226407       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 18:23:39.311447       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1018 18:23:39.327267       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 18:23:39.328115       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 18:23:42 embed-certs-213943 kubelet[777]: I1018 18:23:42.958132     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/94a45217-a2e0-4738-a1ba-b67ebd545bcf-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-vn8f9\" (UID: \"94a45217-a2e0-4738-a1ba-b67ebd545bcf\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vn8f9"
	Oct 18 18:23:42 embed-certs-213943 kubelet[777]: I1018 18:23:42.958160     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rxdb\" (UniqueName: \"kubernetes.io/projected/94a45217-a2e0-4738-a1ba-b67ebd545bcf-kube-api-access-4rxdb\") pod \"dashboard-metrics-scraper-6ffb444bf9-vn8f9\" (UID: \"94a45217-a2e0-4738-a1ba-b67ebd545bcf\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vn8f9"
	Oct 18 18:23:42 embed-certs-213943 kubelet[777]: I1018 18:23:42.958187     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfqtf\" (UniqueName: \"kubernetes.io/projected/0c6652bc-b6db-4827-91e8-190090a50541-kube-api-access-xfqtf\") pod \"kubernetes-dashboard-855c9754f9-nmbd5\" (UID: \"0c6652bc-b6db-4827-91e8-190090a50541\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-nmbd5"
	Oct 18 18:23:43 embed-certs-213943 kubelet[777]: W1018 18:23:43.418689     777 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/f6d884df9095b5a97c2ba5df164207ee5c937524354408254d52ae7a929463c6/crio-f7422b56b39a8b8e2df04760b13976c8965e9db1c910e1dd260aa1eef5d4f402 WatchSource:0}: Error finding container f7422b56b39a8b8e2df04760b13976c8965e9db1c910e1dd260aa1eef5d4f402: Status 404 returned error can't find the container with id f7422b56b39a8b8e2df04760b13976c8965e9db1c910e1dd260aa1eef5d4f402
	Oct 18 18:23:47 embed-certs-213943 kubelet[777]: I1018 18:23:47.932476     777 scope.go:117] "RemoveContainer" containerID="68a722b442dce387f18e7e7cef708fd1ba3e349c7d4b12bd3d1eacb3ac296a37"
	Oct 18 18:23:48 embed-certs-213943 kubelet[777]: I1018 18:23:48.945543     777 scope.go:117] "RemoveContainer" containerID="fe68bf64cd0266dc4e20202680682a4d788bd9f315979e364f0db43903f7f49a"
	Oct 18 18:23:48 embed-certs-213943 kubelet[777]: E1018 18:23:48.945704     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vn8f9_kubernetes-dashboard(94a45217-a2e0-4738-a1ba-b67ebd545bcf)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vn8f9" podUID="94a45217-a2e0-4738-a1ba-b67ebd545bcf"
	Oct 18 18:23:48 embed-certs-213943 kubelet[777]: I1018 18:23:48.948663     777 scope.go:117] "RemoveContainer" containerID="68a722b442dce387f18e7e7cef708fd1ba3e349c7d4b12bd3d1eacb3ac296a37"
	Oct 18 18:23:49 embed-certs-213943 kubelet[777]: I1018 18:23:49.953318     777 scope.go:117] "RemoveContainer" containerID="fe68bf64cd0266dc4e20202680682a4d788bd9f315979e364f0db43903f7f49a"
	Oct 18 18:23:49 embed-certs-213943 kubelet[777]: E1018 18:23:49.953929     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vn8f9_kubernetes-dashboard(94a45217-a2e0-4738-a1ba-b67ebd545bcf)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vn8f9" podUID="94a45217-a2e0-4738-a1ba-b67ebd545bcf"
	Oct 18 18:23:53 embed-certs-213943 kubelet[777]: I1018 18:23:53.075119     777 scope.go:117] "RemoveContainer" containerID="fe68bf64cd0266dc4e20202680682a4d788bd9f315979e364f0db43903f7f49a"
	Oct 18 18:23:53 embed-certs-213943 kubelet[777]: E1018 18:23:53.075336     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vn8f9_kubernetes-dashboard(94a45217-a2e0-4738-a1ba-b67ebd545bcf)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vn8f9" podUID="94a45217-a2e0-4738-a1ba-b67ebd545bcf"
	Oct 18 18:24:07 embed-certs-213943 kubelet[777]: I1018 18:24:07.819003     777 scope.go:117] "RemoveContainer" containerID="fe68bf64cd0266dc4e20202680682a4d788bd9f315979e364f0db43903f7f49a"
	Oct 18 18:24:08 embed-certs-213943 kubelet[777]: I1018 18:24:08.001695     777 scope.go:117] "RemoveContainer" containerID="fe68bf64cd0266dc4e20202680682a4d788bd9f315979e364f0db43903f7f49a"
	Oct 18 18:24:08 embed-certs-213943 kubelet[777]: I1018 18:24:08.002154     777 scope.go:117] "RemoveContainer" containerID="b1cbdd377acf4ce0ba012efbe8a92d490f1cf26de33b65c0792311ca69b2f97d"
	Oct 18 18:24:08 embed-certs-213943 kubelet[777]: E1018 18:24:08.002865     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vn8f9_kubernetes-dashboard(94a45217-a2e0-4738-a1ba-b67ebd545bcf)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vn8f9" podUID="94a45217-a2e0-4738-a1ba-b67ebd545bcf"
	Oct 18 18:24:08 embed-certs-213943 kubelet[777]: I1018 18:24:08.056202     777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-nmbd5" podStartSLOduration=16.643287148 podStartE2EDuration="26.054462058s" podCreationTimestamp="2025-10-18 18:23:42 +0000 UTC" firstStartedPulling="2025-10-18 18:23:43.424571663 +0000 UTC m=+11.814421641" lastFinishedPulling="2025-10-18 18:23:52.835746573 +0000 UTC m=+21.225596551" observedRunningTime="2025-10-18 18:23:52.980625077 +0000 UTC m=+21.370475063" watchObservedRunningTime="2025-10-18 18:24:08.054462058 +0000 UTC m=+36.444312036"
	Oct 18 18:24:10 embed-certs-213943 kubelet[777]: I1018 18:24:10.014024     777 scope.go:117] "RemoveContainer" containerID="c7e17397787390cfe2e365edc60882b35fef038d500e72ed7964bce1242d4793"
	Oct 18 18:24:13 embed-certs-213943 kubelet[777]: I1018 18:24:13.075523     777 scope.go:117] "RemoveContainer" containerID="b1cbdd377acf4ce0ba012efbe8a92d490f1cf26de33b65c0792311ca69b2f97d"
	Oct 18 18:24:13 embed-certs-213943 kubelet[777]: E1018 18:24:13.076363     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vn8f9_kubernetes-dashboard(94a45217-a2e0-4738-a1ba-b67ebd545bcf)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vn8f9" podUID="94a45217-a2e0-4738-a1ba-b67ebd545bcf"
	Oct 18 18:24:24 embed-certs-213943 kubelet[777]: I1018 18:24:24.818913     777 scope.go:117] "RemoveContainer" containerID="b1cbdd377acf4ce0ba012efbe8a92d490f1cf26de33b65c0792311ca69b2f97d"
	Oct 18 18:24:24 embed-certs-213943 kubelet[777]: E1018 18:24:24.819090     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vn8f9_kubernetes-dashboard(94a45217-a2e0-4738-a1ba-b67ebd545bcf)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vn8f9" podUID="94a45217-a2e0-4738-a1ba-b67ebd545bcf"
	Oct 18 18:24:35 embed-certs-213943 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 18:24:35 embed-certs-213943 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 18:24:35 embed-certs-213943 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [e9de5e570569bf04ee4708a292c5a4963413811ea3989c2d9d52ea34af3ed27e] <==
	2025/10/18 18:23:52 Using namespace: kubernetes-dashboard
	2025/10/18 18:23:52 Using in-cluster config to connect to apiserver
	2025/10/18 18:23:52 Using secret token for csrf signing
	2025/10/18 18:23:52 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/18 18:23:52 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/18 18:23:52 Successful initial request to the apiserver, version: v1.34.1
	2025/10/18 18:23:52 Generating JWE encryption key
	2025/10/18 18:23:53 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/18 18:23:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/18 18:23:53 Initializing JWE encryption key from synchronized object
	2025/10/18 18:23:53 Creating in-cluster Sidecar client
	2025/10/18 18:23:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 18:23:53 Serving insecurely on HTTP port: 9090
	2025/10/18 18:24:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 18:23:52 Starting overwatch
	
	
	==> storage-provisioner [6ea0fc669c5a5aed268bc4f1b1959ec658c78291c197f49575e209481a5d2d96] <==
	W1018 18:24:10.099471       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:24:13.554976       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:24:17.815138       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:24:21.414513       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:24:24.468589       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:24:27.490811       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:24:27.499583       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 18:24:27.499837       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 18:24:27.500044       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-213943_b322ac88-cd24-422e-8b88-68dd31ec1db6!
	I1018 18:24:27.504928       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"df0c2d90-b1dc-4b33-97ec-b51fa8382283", APIVersion:"v1", ResourceVersion:"685", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-213943_b322ac88-cd24-422e-8b88-68dd31ec1db6 became leader
	W1018 18:24:27.505186       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:24:27.515086       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 18:24:27.606163       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-213943_b322ac88-cd24-422e-8b88-68dd31ec1db6!
	W1018 18:24:29.517832       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:24:29.534669       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:24:31.541539       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:24:31.554621       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:24:33.557471       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:24:33.568620       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:24:35.572843       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:24:35.582516       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:24:37.587546       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:24:37.601906       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:24:39.604568       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:24:39.615934       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [c7e17397787390cfe2e365edc60882b35fef038d500e72ed7964bce1242d4793] <==
	I1018 18:23:39.760320       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1018 18:24:09.762444       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-213943 -n embed-certs-213943
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-213943 -n embed-certs-213943: exit status 2 (484.621859ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-213943 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (7.83s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.63s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-530891 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-530891 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (280.589834ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T18:25:26Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-530891 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-530891
helpers_test.go:243: (dbg) docker inspect newest-cni-530891:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "592c46465c1aa48efe97f2b3db6c46c918fe8e6fb44a63deec22e7bb1784c31e",
	        "Created": "2025-10-18T18:24:51.961915069Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 215735,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T18:24:52.049072883Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/592c46465c1aa48efe97f2b3db6c46c918fe8e6fb44a63deec22e7bb1784c31e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/592c46465c1aa48efe97f2b3db6c46c918fe8e6fb44a63deec22e7bb1784c31e/hostname",
	        "HostsPath": "/var/lib/docker/containers/592c46465c1aa48efe97f2b3db6c46c918fe8e6fb44a63deec22e7bb1784c31e/hosts",
	        "LogPath": "/var/lib/docker/containers/592c46465c1aa48efe97f2b3db6c46c918fe8e6fb44a63deec22e7bb1784c31e/592c46465c1aa48efe97f2b3db6c46c918fe8e6fb44a63deec22e7bb1784c31e-json.log",
	        "Name": "/newest-cni-530891",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-530891:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-530891",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "592c46465c1aa48efe97f2b3db6c46c918fe8e6fb44a63deec22e7bb1784c31e",
	                "LowerDir": "/var/lib/docker/overlay2/6123219178d0089d945290d8d54993696ba0db05146a36d826c912c6a71dea18-init/diff:/var/lib/docker/overlay2/584ab177b02ad2db5330471b7171ad39934c457d8615b9ee4939a04b59f78474/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6123219178d0089d945290d8d54993696ba0db05146a36d826c912c6a71dea18/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6123219178d0089d945290d8d54993696ba0db05146a36d826c912c6a71dea18/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6123219178d0089d945290d8d54993696ba0db05146a36d826c912c6a71dea18/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-530891",
	                "Source": "/var/lib/docker/volumes/newest-cni-530891/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-530891",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-530891",
	                "name.minikube.sigs.k8s.io": "newest-cni-530891",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8beb83ded3a7a520c7bd2626645004042567d587dec55dacf53cf21aaef7ab58",
	            "SandboxKey": "/var/run/docker/netns/8beb83ded3a7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33078"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33079"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33082"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33080"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33081"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-530891": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1a:8c:91:cb:d8:8b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "430d25fc02daf5d96f95a7e706911a3c6ed05a1ed551d0fc6d07a2b7559606cd",
	                    "EndpointID": "3d58176503f1ed1c232c3b556aa90c358f4142ab9e218582a9a81f80db8c52b9",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-530891",
	                        "592c46465c1a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
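
The port mappings in the inspect output above (22/tcp published on 127.0.0.1:33078, 8443/tcp on 127.0.0.1:33081, and so on) are what the provisioning steps later in this log rely on when they shell into the node. A minimal sketch of how such a mapping can be read back out of `docker container inspect` is shown below; it mirrors the Go template the test binary runs further down in this log ('{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'), but the helper function and its name here are illustrative only and are not part of minikube.

// hostPortFor returns the host port Docker published for the given
// container port (e.g. "22/tcp"), using the same Go template that
// appears in the provisioning log below. Sketch only; error handling
// is kept minimal.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func hostPortFor(container, port string) (string, error) {
	format := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, port)
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	// For the container inspected above this would print 33078,
	// the host port mapped to the node's SSH port 22/tcp.
	p, err := hostPortFor("newest-cni-530891", "22/tcp")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh is reachable on 127.0.0.1:" + p)
}
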
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-530891 -n newest-cni-530891
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-530891 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-530891 logs -n 25: (1.120688665s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p cert-expiration-463770 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-463770       │ jenkins │ v1.37.0 │ 18 Oct 25 18:21 UTC │ 18 Oct 25 18:21 UTC │
	│ delete  │ -p old-k8s-version-918475                                                                                                                                                                                                                     │ old-k8s-version-918475       │ jenkins │ v1.37.0 │ 18 Oct 25 18:21 UTC │ 18 Oct 25 18:21 UTC │
	│ start   │ -p default-k8s-diff-port-192562 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-192562 │ jenkins │ v1.37.0 │ 18 Oct 25 18:21 UTC │ 18 Oct 25 18:22 UTC │
	│ delete  │ -p cert-expiration-463770                                                                                                                                                                                                                     │ cert-expiration-463770       │ jenkins │ v1.37.0 │ 18 Oct 25 18:21 UTC │ 18 Oct 25 18:21 UTC │
	│ start   │ -p embed-certs-213943 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-213943           │ jenkins │ v1.37.0 │ 18 Oct 25 18:21 UTC │ 18 Oct 25 18:22 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-192562 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-192562 │ jenkins │ v1.37.0 │ 18 Oct 25 18:22 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-192562 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-192562 │ jenkins │ v1.37.0 │ 18 Oct 25 18:22 UTC │ 18 Oct 25 18:22 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-192562 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-192562 │ jenkins │ v1.37.0 │ 18 Oct 25 18:22 UTC │ 18 Oct 25 18:22 UTC │
	│ start   │ -p default-k8s-diff-port-192562 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-192562 │ jenkins │ v1.37.0 │ 18 Oct 25 18:22 UTC │ 18 Oct 25 18:23 UTC │
	│ addons  │ enable metrics-server -p embed-certs-213943 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-213943           │ jenkins │ v1.37.0 │ 18 Oct 25 18:23 UTC │                     │
	│ stop    │ -p embed-certs-213943 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-213943           │ jenkins │ v1.37.0 │ 18 Oct 25 18:23 UTC │ 18 Oct 25 18:23 UTC │
	│ addons  │ enable dashboard -p embed-certs-213943 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-213943           │ jenkins │ v1.37.0 │ 18 Oct 25 18:23 UTC │ 18 Oct 25 18:23 UTC │
	│ start   │ -p embed-certs-213943 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-213943           │ jenkins │ v1.37.0 │ 18 Oct 25 18:23 UTC │ 18 Oct 25 18:24 UTC │
	│ image   │ default-k8s-diff-port-192562 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-192562 │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │ 18 Oct 25 18:24 UTC │
	│ pause   │ -p default-k8s-diff-port-192562 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-192562 │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-192562                                                                                                                                                                                                               │ default-k8s-diff-port-192562 │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │ 18 Oct 25 18:24 UTC │
	│ delete  │ -p default-k8s-diff-port-192562                                                                                                                                                                                                               │ default-k8s-diff-port-192562 │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │ 18 Oct 25 18:24 UTC │
	│ delete  │ -p disable-driver-mounts-747178                                                                                                                                                                                                               │ disable-driver-mounts-747178 │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │ 18 Oct 25 18:24 UTC │
	│ start   │ -p no-preload-729957 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-729957            │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │ 18 Oct 25 18:25 UTC │
	│ image   │ embed-certs-213943 image list --format=json                                                                                                                                                                                                   │ embed-certs-213943           │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │ 18 Oct 25 18:24 UTC │
	│ pause   │ -p embed-certs-213943 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-213943           │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │                     │
	│ delete  │ -p embed-certs-213943                                                                                                                                                                                                                         │ embed-certs-213943           │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │ 18 Oct 25 18:24 UTC │
	│ delete  │ -p embed-certs-213943                                                                                                                                                                                                                         │ embed-certs-213943           │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │ 18 Oct 25 18:24 UTC │
	│ start   │ -p newest-cni-530891 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-530891            │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │ 18 Oct 25 18:25 UTC │
	│ addons  │ enable metrics-server -p newest-cni-530891 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-530891            │ jenkins │ v1.37.0 │ 18 Oct 25 18:25 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 18:24:45
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 18:24:45.932875  215342 out.go:360] Setting OutFile to fd 1 ...
	I1018 18:24:45.933083  215342 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 18:24:45.933106  215342 out.go:374] Setting ErrFile to fd 2...
	I1018 18:24:45.933125  215342 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 18:24:45.933400  215342 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-2509/.minikube/bin
	I1018 18:24:45.933823  215342 out.go:368] Setting JSON to false
	I1018 18:24:45.934723  215342 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7635,"bootTime":1760804251,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1018 18:24:45.934807  215342 start.go:141] virtualization:  
	I1018 18:24:45.940908  215342 out.go:179] * [newest-cni-530891] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 18:24:45.944292  215342 notify.go:220] Checking for updates...
	I1018 18:24:45.944894  215342 out.go:179]   - MINIKUBE_LOCATION=21409
	I1018 18:24:45.947949  215342 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 18:24:45.951253  215342 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-2509/kubeconfig
	I1018 18:24:45.954244  215342 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-2509/.minikube
	I1018 18:24:45.957079  215342 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 18:24:45.960292  215342 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 18:24:45.963749  215342 config.go:182] Loaded profile config "no-preload-729957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 18:24:45.963875  215342 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 18:24:46.015629  215342 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 18:24:46.015801  215342 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 18:24:46.138703  215342 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:44 OomKillDisable:true NGoroutines:60 SystemTime:2025-10-18 18:24:46.126579734 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 18:24:46.138809  215342 docker.go:318] overlay module found
	I1018 18:24:46.141923  215342 out.go:179] * Using the docker driver based on user configuration
	I1018 18:24:42.730436  211246 out.go:252]   - Booting up control plane ...
	I1018 18:24:42.730543  211246 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 18:24:42.730625  211246 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 18:24:42.732053  211246 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 18:24:42.758979  211246 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 18:24:42.759091  211246 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 18:24:42.773923  211246 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 18:24:42.774281  211246 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 18:24:42.774331  211246 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 18:24:42.986560  211246 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 18:24:42.986687  211246 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 18:24:43.494294  211246 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 506.537087ms
	I1018 18:24:43.503411  211246 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 18:24:43.503813  211246 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1018 18:24:43.504143  211246 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 18:24:43.506779  211246 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 18:24:46.144693  215342 start.go:305] selected driver: docker
	I1018 18:24:46.144706  215342 start.go:925] validating driver "docker" against <nil>
	I1018 18:24:46.144719  215342 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 18:24:46.145495  215342 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 18:24:46.255165  215342 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:44 OomKillDisable:true NGoroutines:60 SystemTime:2025-10-18 18:24:46.242928815 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 18:24:46.255316  215342 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1018 18:24:46.255344  215342 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1018 18:24:46.255580  215342 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1018 18:24:46.258752  215342 out.go:179] * Using Docker driver with root privileges
	I1018 18:24:46.261553  215342 cni.go:84] Creating CNI manager for ""
	I1018 18:24:46.261623  215342 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 18:24:46.261637  215342 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 18:24:46.261723  215342 start.go:349] cluster config:
	{Name:newest-cni-530891 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-530891 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 18:24:46.266602  215342 out.go:179] * Starting "newest-cni-530891" primary control-plane node in "newest-cni-530891" cluster
	I1018 18:24:46.269544  215342 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 18:24:46.272492  215342 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 18:24:46.275195  215342 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 18:24:46.275261  215342 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1018 18:24:46.275276  215342 cache.go:58] Caching tarball of preloaded images
	I1018 18:24:46.275361  215342 preload.go:233] Found /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 18:24:46.275380  215342 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 18:24:46.275497  215342 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/newest-cni-530891/config.json ...
	I1018 18:24:46.275520  215342 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/newest-cni-530891/config.json: {Name:mk5c50712877ef8c2e83788190119601f25e9ded Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:24:46.275690  215342 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 18:24:46.325532  215342 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 18:24:46.325552  215342 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 18:24:46.325573  215342 cache.go:232] Successfully downloaded all kic artifacts
	I1018 18:24:46.325595  215342 start.go:360] acquireMachinesLock for newest-cni-530891: {Name:mk0c4ba013544ae9a143d95908b1cd72d649cb51 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 18:24:46.325708  215342 start.go:364] duration metric: took 98.709µs to acquireMachinesLock for "newest-cni-530891"
	I1018 18:24:46.325733  215342 start.go:93] Provisioning new machine with config: &{Name:newest-cni-530891 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-530891 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 18:24:46.325809  215342 start.go:125] createHost starting for "" (driver="docker")
	I1018 18:24:46.329254  215342 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1018 18:24:46.329501  215342 start.go:159] libmachine.API.Create for "newest-cni-530891" (driver="docker")
	I1018 18:24:46.329548  215342 client.go:168] LocalClient.Create starting
	I1018 18:24:46.329620  215342 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem
	I1018 18:24:46.329655  215342 main.go:141] libmachine: Decoding PEM data...
	I1018 18:24:46.329668  215342 main.go:141] libmachine: Parsing certificate...
	I1018 18:24:46.329724  215342 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem
	I1018 18:24:46.329740  215342 main.go:141] libmachine: Decoding PEM data...
	I1018 18:24:46.329754  215342 main.go:141] libmachine: Parsing certificate...
	I1018 18:24:46.330107  215342 cli_runner.go:164] Run: docker network inspect newest-cni-530891 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 18:24:46.354062  215342 cli_runner.go:211] docker network inspect newest-cni-530891 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 18:24:46.354142  215342 network_create.go:284] running [docker network inspect newest-cni-530891] to gather additional debugging logs...
	I1018 18:24:46.354163  215342 cli_runner.go:164] Run: docker network inspect newest-cni-530891
	W1018 18:24:46.380902  215342 cli_runner.go:211] docker network inspect newest-cni-530891 returned with exit code 1
	I1018 18:24:46.380947  215342 network_create.go:287] error running [docker network inspect newest-cni-530891]: docker network inspect newest-cni-530891: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-530891 not found
	I1018 18:24:46.380961  215342 network_create.go:289] output of [docker network inspect newest-cni-530891]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-530891 not found
	
	** /stderr **
	I1018 18:24:46.381070  215342 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 18:24:46.410899  215342 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-903568cdf824 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:7a:80:c0:8c:ed} reservation:<nil>}
	I1018 18:24:46.411239  215342 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-ee9fcaab9ca8 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a6:a7:65:1b:c0:41} reservation:<nil>}
	I1018 18:24:46.411568  215342 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-414fc11e154b IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:86:f0:a8:1a:86:00} reservation:<nil>}
	I1018 18:24:46.411832  215342 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-9171cfee9247 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:e6:21:8a:96:2d:4e} reservation:<nil>}
	I1018 18:24:46.412240  215342 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a23510}
	I1018 18:24:46.412266  215342 network_create.go:124] attempt to create docker network newest-cni-530891 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1018 18:24:46.412324  215342 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-530891 newest-cni-530891
	I1018 18:24:46.507412  215342 network_create.go:108] docker network newest-cni-530891 192.168.85.0/24 created
	I1018 18:24:46.507449  215342 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-530891" container
	I1018 18:24:46.507539  215342 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 18:24:46.557380  215342 cli_runner.go:164] Run: docker volume create newest-cni-530891 --label name.minikube.sigs.k8s.io=newest-cni-530891 --label created_by.minikube.sigs.k8s.io=true
	I1018 18:24:46.588225  215342 oci.go:103] Successfully created a docker volume newest-cni-530891
	I1018 18:24:46.588322  215342 cli_runner.go:164] Run: docker run --rm --name newest-cni-530891-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-530891 --entrypoint /usr/bin/test -v newest-cni-530891:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 18:24:47.355861  215342 oci.go:107] Successfully prepared a docker volume newest-cni-530891
	I1018 18:24:47.355903  215342 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 18:24:47.355922  215342 kic.go:194] Starting extracting preloaded images to volume ...
	I1018 18:24:47.355983  215342 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-530891:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1018 18:24:47.791431  211246 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 4.283724914s
	I1018 18:24:51.110185  211246 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 7.602905366s
	I1018 18:24:53.511136  211246 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 10.003651368s
	I1018 18:24:53.617528  211246 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 18:24:53.656162  211246 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 18:24:53.685048  211246 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 18:24:53.685495  211246 kubeadm.go:318] [mark-control-plane] Marking the node no-preload-729957 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 18:24:53.711832  211246 kubeadm.go:318] [bootstrap-token] Using token: 1orxdi.912wtn2m7d6gr5u8
	I1018 18:24:53.714768  211246 out.go:252]   - Configuring RBAC rules ...
	I1018 18:24:53.714896  211246 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 18:24:53.743390  211246 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 18:24:53.771764  211246 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 18:24:53.781347  211246 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 18:24:53.792076  211246 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 18:24:53.805584  211246 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 18:24:53.931799  211246 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 18:24:54.570379  211246 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 18:24:54.922190  211246 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 18:24:54.923347  211246 kubeadm.go:318] 
	I1018 18:24:54.923452  211246 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 18:24:54.923459  211246 kubeadm.go:318] 
	I1018 18:24:54.923540  211246 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 18:24:54.923545  211246 kubeadm.go:318] 
	I1018 18:24:54.923572  211246 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 18:24:54.923641  211246 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 18:24:54.923694  211246 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 18:24:54.923699  211246 kubeadm.go:318] 
	I1018 18:24:54.923756  211246 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 18:24:54.923760  211246 kubeadm.go:318] 
	I1018 18:24:54.923810  211246 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 18:24:54.923815  211246 kubeadm.go:318] 
	I1018 18:24:54.923869  211246 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 18:24:54.923948  211246 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 18:24:54.924019  211246 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 18:24:54.924024  211246 kubeadm.go:318] 
	I1018 18:24:54.924112  211246 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 18:24:54.924196  211246 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 18:24:54.924201  211246 kubeadm.go:318] 
	I1018 18:24:54.924293  211246 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 1orxdi.912wtn2m7d6gr5u8 \
	I1018 18:24:54.924401  211246 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:d0244c5bf86cdf97546c6a22045cb6ed9d7ead524d9c98d9ca35da77d5d7a04d \
	I1018 18:24:54.924422  211246 kubeadm.go:318] 	--control-plane 
	I1018 18:24:54.924427  211246 kubeadm.go:318] 
	I1018 18:24:54.924515  211246 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 18:24:54.924519  211246 kubeadm.go:318] 
	I1018 18:24:54.924604  211246 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 1orxdi.912wtn2m7d6gr5u8 \
	I1018 18:24:54.924719  211246 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:d0244c5bf86cdf97546c6a22045cb6ed9d7ead524d9c98d9ca35da77d5d7a04d 
	I1018 18:24:54.931528  211246 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1018 18:24:54.931767  211246 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1018 18:24:54.931876  211246 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1018 18:24:54.931954  211246 cni.go:84] Creating CNI manager for ""
	I1018 18:24:54.931965  211246 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 18:24:54.935698  211246 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1018 18:24:51.859966  215342 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-530891:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.503943282s)
	I1018 18:24:51.859995  215342 kic.go:203] duration metric: took 4.504070151s to extract preloaded images to volume ...
	W1018 18:24:51.860161  215342 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1018 18:24:51.860262  215342 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 18:24:51.944064  215342 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-530891 --name newest-cni-530891 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-530891 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-530891 --network newest-cni-530891 --ip 192.168.85.2 --volume newest-cni-530891:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 18:24:52.388897  215342 cli_runner.go:164] Run: docker container inspect newest-cni-530891 --format={{.State.Running}}
	I1018 18:24:52.414787  215342 cli_runner.go:164] Run: docker container inspect newest-cni-530891 --format={{.State.Status}}
	I1018 18:24:52.442620  215342 cli_runner.go:164] Run: docker exec newest-cni-530891 stat /var/lib/dpkg/alternatives/iptables
	I1018 18:24:52.510078  215342 oci.go:144] the created container "newest-cni-530891" has a running status.
	I1018 18:24:52.510112  215342 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/newest-cni-530891/id_rsa...
	I1018 18:24:54.758513  215342 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21409-2509/.minikube/machines/newest-cni-530891/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 18:24:54.785747  215342 cli_runner.go:164] Run: docker container inspect newest-cni-530891 --format={{.State.Status}}
	I1018 18:24:54.803882  215342 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 18:24:54.803907  215342 kic_runner.go:114] Args: [docker exec --privileged newest-cni-530891 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 18:24:54.886757  215342 cli_runner.go:164] Run: docker container inspect newest-cni-530891 --format={{.State.Status}}
	I1018 18:24:54.917955  215342 machine.go:93] provisionDockerMachine start ...
	I1018 18:24:54.918087  215342 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-530891
	I1018 18:24:54.950227  215342 main.go:141] libmachine: Using SSH client type: native
	I1018 18:24:54.950595  215342 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1018 18:24:54.950615  215342 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 18:24:55.121041  215342 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-530891
	
	I1018 18:24:55.121118  215342 ubuntu.go:182] provisioning hostname "newest-cni-530891"
	I1018 18:24:55.121215  215342 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-530891
	I1018 18:24:55.153287  215342 main.go:141] libmachine: Using SSH client type: native
	I1018 18:24:55.153591  215342 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1018 18:24:55.153603  215342 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-530891 && echo "newest-cni-530891" | sudo tee /etc/hostname
	I1018 18:24:55.339475  215342 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-530891
	
	I1018 18:24:55.339581  215342 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-530891
	I1018 18:24:55.366931  215342 main.go:141] libmachine: Using SSH client type: native
	I1018 18:24:55.367246  215342 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1018 18:24:55.367274  215342 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-530891' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-530891/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-530891' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 18:24:55.526439  215342 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 18:24:55.526468  215342 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-2509/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-2509/.minikube}
	I1018 18:24:55.526537  215342 ubuntu.go:190] setting up certificates
	I1018 18:24:55.526554  215342 provision.go:84] configureAuth start
	I1018 18:24:55.526632  215342 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-530891
	I1018 18:24:55.553249  215342 provision.go:143] copyHostCerts
	I1018 18:24:55.553319  215342 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem, removing ...
	I1018 18:24:55.553333  215342 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem
	I1018 18:24:55.553422  215342 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem (1078 bytes)
	I1018 18:24:55.553521  215342 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem, removing ...
	I1018 18:24:55.553531  215342 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem
	I1018 18:24:55.553558  215342 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem (1123 bytes)
	I1018 18:24:55.553613  215342 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem, removing ...
	I1018 18:24:55.553627  215342 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem
	I1018 18:24:55.553651  215342 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem (1675 bytes)
	I1018 18:24:55.553700  215342 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem org=jenkins.newest-cni-530891 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-530891]
	I1018 18:24:54.938776  211246 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 18:24:54.946051  211246 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1018 18:24:54.946069  211246 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 18:24:54.986484  211246 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1018 18:24:55.486636  211246 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 18:24:55.486784  211246 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:24:55.486866  211246 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-729957 minikube.k8s.io/updated_at=2025_10_18T18_24_55_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404 minikube.k8s.io/name=no-preload-729957 minikube.k8s.io/primary=true
	I1018 18:24:55.792570  211246 ops.go:34] apiserver oom_adj: -16
	I1018 18:24:55.792713  211246 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:24:56.523844  215342 provision.go:177] copyRemoteCerts
	I1018 18:24:56.523917  215342 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 18:24:56.523986  215342 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-530891
	I1018 18:24:56.543208  215342 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/newest-cni-530891/id_rsa Username:docker}
	I1018 18:24:56.653678  215342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 18:24:56.673832  215342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1018 18:24:56.695486  215342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1018 18:24:56.716300  215342 provision.go:87] duration metric: took 1.189723873s to configureAuth
	I1018 18:24:56.716325  215342 ubuntu.go:206] setting minikube options for container-runtime
	I1018 18:24:56.716526  215342 config.go:182] Loaded profile config "newest-cni-530891": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 18:24:56.716639  215342 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-530891
	I1018 18:24:56.734455  215342 main.go:141] libmachine: Using SSH client type: native
	I1018 18:24:56.734755  215342 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1018 18:24:56.734774  215342 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 18:24:57.028752  215342 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 18:24:57.028782  215342 machine.go:96] duration metric: took 2.110790442s to provisionDockerMachine
	I1018 18:24:57.028792  215342 client.go:171] duration metric: took 10.699237032s to LocalClient.Create
	I1018 18:24:57.028806  215342 start.go:167] duration metric: took 10.69930562s to libmachine.API.Create "newest-cni-530891"
	I1018 18:24:57.028813  215342 start.go:293] postStartSetup for "newest-cni-530891" (driver="docker")
	I1018 18:24:57.028823  215342 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 18:24:57.028899  215342 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 18:24:57.028975  215342 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-530891
	I1018 18:24:57.049346  215342 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/newest-cni-530891/id_rsa Username:docker}
	I1018 18:24:57.157817  215342 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 18:24:57.161275  215342 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 18:24:57.161306  215342 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 18:24:57.161321  215342 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/addons for local assets ...
	I1018 18:24:57.161377  215342 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/files for local assets ...
	I1018 18:24:57.161460  215342 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> 43202.pem in /etc/ssl/certs
	I1018 18:24:57.161565  215342 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 18:24:57.169000  215342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /etc/ssl/certs/43202.pem (1708 bytes)
	I1018 18:24:57.186452  215342 start.go:296] duration metric: took 157.624402ms for postStartSetup
	I1018 18:24:57.186812  215342 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-530891
	I1018 18:24:57.205068  215342 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/newest-cni-530891/config.json ...
	I1018 18:24:57.205361  215342 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 18:24:57.205427  215342 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-530891
	I1018 18:24:57.223454  215342 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/newest-cni-530891/id_rsa Username:docker}
	I1018 18:24:57.327080  215342 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 18:24:57.332503  215342 start.go:128] duration metric: took 11.006678228s to createHost
	I1018 18:24:57.332568  215342 start.go:83] releasing machines lock for "newest-cni-530891", held for 11.006849988s
	I1018 18:24:57.332665  215342 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-530891
	I1018 18:24:57.353506  215342 ssh_runner.go:195] Run: cat /version.json
	I1018 18:24:57.353555  215342 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 18:24:57.353618  215342 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-530891
	I1018 18:24:57.353561  215342 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-530891
	I1018 18:24:57.402424  215342 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/newest-cni-530891/id_rsa Username:docker}
	I1018 18:24:57.414590  215342 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/newest-cni-530891/id_rsa Username:docker}
	I1018 18:24:57.621798  215342 ssh_runner.go:195] Run: systemctl --version
	I1018 18:24:57.628577  215342 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 18:24:57.666794  215342 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 18:24:57.671441  215342 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 18:24:57.671531  215342 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 18:24:57.704094  215342 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1018 18:24:57.704129  215342 start.go:495] detecting cgroup driver to use...
	I1018 18:24:57.704178  215342 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 18:24:57.704254  215342 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 18:24:57.723105  215342 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 18:24:57.736738  215342 docker.go:218] disabling cri-docker service (if available) ...
	I1018 18:24:57.736808  215342 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 18:24:57.755446  215342 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 18:24:57.776315  215342 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 18:24:57.937793  215342 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 18:24:58.093115  215342 docker.go:234] disabling docker service ...
	I1018 18:24:58.093235  215342 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 18:24:58.118016  215342 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 18:24:58.132556  215342 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 18:24:58.260457  215342 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 18:24:58.421965  215342 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 18:24:58.436494  215342 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 18:24:58.457029  215342 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 18:24:58.457135  215342 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:24:58.489681  215342 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 18:24:58.489787  215342 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:24:58.503613  215342 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:24:58.516829  215342 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:24:58.532562  215342 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 18:24:58.548395  215342 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:24:58.564511  215342 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:24:58.590389  215342 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:24:58.609218  215342 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 18:24:58.622461  215342 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 18:24:58.635883  215342 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 18:24:58.766382  215342 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 18:24:58.923839  215342 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 18:24:58.923950  215342 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 18:24:58.928370  215342 start.go:563] Will wait 60s for crictl version
	I1018 18:24:58.928464  215342 ssh_runner.go:195] Run: which crictl
	I1018 18:24:58.932009  215342 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 18:24:58.960459  215342 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
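The version block above comes from querying CRI-O over its CRI endpoint, the same /var/run/crio/crio.sock the run waits on. A minimal sketch of that query using the CRI gRPC API — the google.golang.org/grpc and k8s.io/cri-api modules are assumed to be available; none of this is taken from minikube's own code:

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Dial the CRI socket the log waits on ("Will wait 60s for crictl version").
        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()

        resp, err := runtimeapi.NewRuntimeServiceClient(conn).Version(ctx, &runtimeapi.VersionRequest{})
        if err != nil {
            panic(err)
        }
        fmt.Println(resp.RuntimeName, resp.RuntimeVersion, resp.RuntimeApiVersion)
    }
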
	I1018 18:24:58.960606  215342 ssh_runner.go:195] Run: crio --version
	I1018 18:24:58.989973  215342 ssh_runner.go:195] Run: crio --version
	I1018 18:24:59.027520  215342 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 18:24:59.030427  215342 cli_runner.go:164] Run: docker network inspect newest-cni-530891 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 18:24:59.049026  215342 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1018 18:24:59.053584  215342 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 18:24:59.066670  215342 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1018 18:24:56.293340  211246 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:24:56.793646  211246 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:24:57.293061  211246 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:24:57.793100  211246 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:24:58.293712  211246 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:24:58.792811  211246 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:24:59.293138  211246 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:24:59.498371  211246 kubeadm.go:1113] duration metric: took 4.011641283s to wait for elevateKubeSystemPrivileges
	I1018 18:24:59.498399  211246 kubeadm.go:402] duration metric: took 24.387075148s to StartCluster
	I1018 18:24:59.498431  211246 settings.go:142] acquiring lock: {Name:mk3a3fd093bc95e20cc1842611fedcbe4a79e692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:24:59.498489  211246 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-2509/kubeconfig
	I1018 18:24:59.499116  211246 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/kubeconfig: {Name:mk229d7d0b6575771899e0ce8a346bbddf5cf86b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:24:59.499328  211246 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 18:24:59.499425  211246 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 18:24:59.499640  211246 config.go:182] Loaded profile config "no-preload-729957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 18:24:59.499670  211246 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 18:24:59.499726  211246 addons.go:69] Setting storage-provisioner=true in profile "no-preload-729957"
	I1018 18:24:59.499740  211246 addons.go:238] Setting addon storage-provisioner=true in "no-preload-729957"
	I1018 18:24:59.499760  211246 host.go:66] Checking if "no-preload-729957" exists ...
	I1018 18:24:59.500236  211246 cli_runner.go:164] Run: docker container inspect no-preload-729957 --format={{.State.Status}}
	I1018 18:24:59.500651  211246 addons.go:69] Setting default-storageclass=true in profile "no-preload-729957"
	I1018 18:24:59.500674  211246 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-729957"
	I1018 18:24:59.500977  211246 cli_runner.go:164] Run: docker container inspect no-preload-729957 --format={{.State.Status}}
	I1018 18:24:59.503260  211246 out.go:179] * Verifying Kubernetes components...
	I1018 18:24:59.506966  211246 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 18:24:59.535547  211246 addons.go:238] Setting addon default-storageclass=true in "no-preload-729957"
	I1018 18:24:59.535585  211246 host.go:66] Checking if "no-preload-729957" exists ...
	I1018 18:24:59.535992  211246 cli_runner.go:164] Run: docker container inspect no-preload-729957 --format={{.State.Status}}
	I1018 18:24:59.544152  211246 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 18:24:59.069396  215342 kubeadm.go:883] updating cluster {Name:newest-cni-530891 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-530891 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 18:24:59.069559  215342 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 18:24:59.069660  215342 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 18:24:59.105741  215342 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 18:24:59.105764  215342 crio.go:433] Images already preloaded, skipping extraction
	I1018 18:24:59.105822  215342 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 18:24:59.130534  215342 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 18:24:59.130557  215342 cache_images.go:85] Images are preloaded, skipping loading
	I1018 18:24:59.130565  215342 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1018 18:24:59.130650  215342 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-530891 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-530891 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 18:24:59.130735  215342 ssh_runner.go:195] Run: crio config
	I1018 18:24:59.238064  215342 cni.go:84] Creating CNI manager for ""
	I1018 18:24:59.238136  215342 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 18:24:59.238166  215342 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1018 18:24:59.238224  215342 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-530891 NodeName:newest-cni-530891 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 18:24:59.238393  215342 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-530891"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 18:24:59.238496  215342 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 18:24:59.246986  215342 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 18:24:59.247101  215342 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 18:24:59.255010  215342 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1018 18:24:59.270921  215342 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 18:24:59.283902  215342 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
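The kubeadm.yaml generated and copied above keeps the pod network (10.42.0.0/16, from the kubeadm.pod-network-cidr extra option) disjoint from the service network (10.96.0.0/12). A quick stdlib check of that property, plus the fact that the first service IP 10.96.0.1 (which also shows up later among the apiserver cert SANs) falls inside the service range — the CIDR values are taken from the config above, nothing else is assumed:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        podCIDR := netip.MustParsePrefix("10.42.0.0/16") // podSubnet / clusterCIDR above
        svcCIDR := netip.MustParsePrefix("10.96.0.0/12") // serviceSubnet above
        kubeAPI := netip.MustParseAddr("10.96.0.1")      // first service IP (kubernetes.default)

        // Two CIDR blocks overlap iff one contains the other's base address.
        overlap := podCIDR.Contains(svcCIDR.Addr()) || svcCIDR.Contains(podCIDR.Addr())
        fmt.Println("pod/service CIDRs overlap:", overlap)                         // false
        fmt.Println("service CIDR contains 10.96.0.1:", svcCIDR.Contains(kubeAPI)) // true
    }
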
	I1018 18:24:59.303733  215342 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1018 18:24:59.308014  215342 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 18:24:59.320818  215342 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 18:24:59.541695  215342 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 18:24:59.581016  215342 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/newest-cni-530891 for IP: 192.168.85.2
	I1018 18:24:59.581035  215342 certs.go:195] generating shared ca certs ...
	I1018 18:24:59.581057  215342 certs.go:227] acquiring lock for ca certs: {Name:mk544ed642fa2832e9f6dd22fa45f3270b7c1ad4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:24:59.581194  215342 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key
	I1018 18:24:59.581235  215342 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key
	I1018 18:24:59.581242  215342 certs.go:257] generating profile certs ...
	I1018 18:24:59.581304  215342 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/newest-cni-530891/client.key
	I1018 18:24:59.581314  215342 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/newest-cni-530891/client.crt with IP's: []
	I1018 18:24:59.775607  215342 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/newest-cni-530891/client.crt ...
	I1018 18:24:59.775644  215342 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/newest-cni-530891/client.crt: {Name:mkfb2b01a029c3ec7d8b39650689a2841c96b5f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:24:59.775826  215342 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/newest-cni-530891/client.key ...
	I1018 18:24:59.775841  215342 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/newest-cni-530891/client.key: {Name:mkf199f78dd53c80a75b60e7356f06520d4d7edc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:24:59.775923  215342 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/newest-cni-530891/apiserver.key.41f4075b
	I1018 18:24:59.775942  215342 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/newest-cni-530891/apiserver.crt.41f4075b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1018 18:25:00.329130  215342 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/newest-cni-530891/apiserver.crt.41f4075b ...
	I1018 18:25:00.329167  215342 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/newest-cni-530891/apiserver.crt.41f4075b: {Name:mkf365ffd4125c8bbfe53ccf847577d844693ef1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:25:00.329365  215342 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/newest-cni-530891/apiserver.key.41f4075b ...
	I1018 18:25:00.329524  215342 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/newest-cni-530891/apiserver.key.41f4075b: {Name:mk72bdf1046f6c61552d02a1873a3f73ba03738f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:25:00.329656  215342 certs.go:382] copying /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/newest-cni-530891/apiserver.crt.41f4075b -> /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/newest-cni-530891/apiserver.crt
	I1018 18:25:00.329762  215342 certs.go:386] copying /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/newest-cni-530891/apiserver.key.41f4075b -> /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/newest-cni-530891/apiserver.key
	I1018 18:25:00.329878  215342 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/newest-cni-530891/proxy-client.key
	I1018 18:25:00.329906  215342 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/newest-cni-530891/proxy-client.crt with IP's: []
	I1018 18:25:00.660039  215342 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/newest-cni-530891/proxy-client.crt ...
	I1018 18:25:00.660114  215342 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/newest-cni-530891/proxy-client.crt: {Name:mk536eef0d60632d58f25f2d2097a2e43686c535 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:25:00.660387  215342 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/newest-cni-530891/proxy-client.key ...
	I1018 18:25:00.660432  215342 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/newest-cni-530891/proxy-client.key: {Name:mk00310e4828ea5b36061eb09117b8b053f89c9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:25:00.660704  215342 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem (1338 bytes)
	W1018 18:25:00.660828  215342 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320_empty.pem, impossibly tiny 0 bytes
	I1018 18:25:00.660862  215342 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 18:25:00.660911  215342 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem (1078 bytes)
	I1018 18:25:00.661627  215342 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem (1123 bytes)
	I1018 18:25:00.661695  215342 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem (1675 bytes)
	I1018 18:25:00.661801  215342 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem (1708 bytes)
	I1018 18:25:00.662482  215342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 18:25:00.686788  215342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 18:25:00.709540  215342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 18:25:00.743342  215342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 18:25:00.781312  215342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/newest-cni-530891/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1018 18:25:00.812628  215342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/newest-cni-530891/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 18:25:00.838335  215342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/newest-cni-530891/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 18:25:00.865025  215342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/newest-cni-530891/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 18:25:00.898136  215342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /usr/share/ca-certificates/43202.pem (1708 bytes)
	I1018 18:25:00.922470  215342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 18:24:59.547805  211246 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 18:24:59.547826  211246 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 18:24:59.547890  211246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-729957
	I1018 18:24:59.577806  211246 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/no-preload-729957/id_rsa Username:docker}
	I1018 18:24:59.616346  211246 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 18:24:59.616375  211246 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 18:24:59.616440  211246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-729957
	I1018 18:24:59.701348  211246 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/no-preload-729957/id_rsa Username:docker}
	I1018 18:25:00.335110  211246 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 18:25:00.403928  211246 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 18:25:00.404218  211246 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1018 18:25:00.545349  211246 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 18:25:00.957364  215342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem --> /usr/share/ca-certificates/4320.pem (1338 bytes)
	I1018 18:25:00.982181  215342 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 18:25:01.002053  215342 ssh_runner.go:195] Run: openssl version
	I1018 18:25:01.012526  215342 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 18:25:01.023410  215342 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 18:25:01.030403  215342 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I1018 18:25:01.030518  215342 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 18:25:01.080814  215342 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 18:25:01.095942  215342 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4320.pem && ln -fs /usr/share/ca-certificates/4320.pem /etc/ssl/certs/4320.pem"
	I1018 18:25:01.106445  215342 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4320.pem
	I1018 18:25:01.113684  215342 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 17:19 /usr/share/ca-certificates/4320.pem
	I1018 18:25:01.113805  215342 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4320.pem
	I1018 18:25:01.163270  215342 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4320.pem /etc/ssl/certs/51391683.0"
	I1018 18:25:01.173116  215342 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43202.pem && ln -fs /usr/share/ca-certificates/43202.pem /etc/ssl/certs/43202.pem"
	I1018 18:25:01.183722  215342 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43202.pem
	I1018 18:25:01.190227  215342 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 17:19 /usr/share/ca-certificates/43202.pem
	I1018 18:25:01.190397  215342 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43202.pem
	I1018 18:25:01.237658  215342 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43202.pem /etc/ssl/certs/3ec20f2e.0"
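The openssl x509 -hash runs and ln -fs calls above publish the copied PEM files under /etc/ssl/certs/<hash>.0 so OpenSSL-based tools on the node trust them. A Go client gets the same effect by loading the PEM into a cert pool directly — a small sketch (the CA path is the one installed above; the URL is only an example, not something this run requests):

    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "fmt"
        "net/http"
        "os"
    )

    func main() {
        // Load the CA that the log installs as /usr/share/ca-certificates/minikubeCA.pem.
        pemBytes, err := os.ReadFile("/usr/share/ca-certificates/minikubeCA.pem")
        if err != nil {
            panic(err)
        }
        pool := x509.NewCertPool()
        if !pool.AppendCertsFromPEM(pemBytes) {
            panic("no certificates found in PEM file")
        }

        // An HTTP client that trusts only that CA.
        client := &http.Client{
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{RootCAs: pool},
            },
        }
        resp, err := client.Get("https://192.168.85.2:8443/version")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        fmt.Println("status:", resp.Status)
    }
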
	I1018 18:25:01.247340  215342 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 18:25:01.254008  215342 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 18:25:01.254122  215342 kubeadm.go:400] StartCluster: {Name:newest-cni-530891 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-530891 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 18:25:01.254281  215342 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 18:25:01.254378  215342 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 18:25:01.305658  215342 cri.go:89] found id: ""
	I1018 18:25:01.305786  215342 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 18:25:01.317661  215342 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 18:25:01.333654  215342 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 18:25:01.333765  215342 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 18:25:01.347311  215342 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 18:25:01.347382  215342 kubeadm.go:157] found existing configuration files:
	
	I1018 18:25:01.347467  215342 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 18:25:01.358918  215342 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 18:25:01.359028  215342 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 18:25:01.371893  215342 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 18:25:01.387152  215342 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 18:25:01.387273  215342 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 18:25:01.400682  215342 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 18:25:01.414935  215342 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 18:25:01.415052  215342 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 18:25:01.429061  215342 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 18:25:01.447607  215342 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 18:25:01.447726  215342 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1018 18:25:01.463674  215342 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 18:25:01.611790  215342 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 18:25:01.618165  215342 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 18:25:01.675624  215342 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 18:25:01.675787  215342 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1018 18:25:01.675859  215342 kubeadm.go:318] OS: Linux
	I1018 18:25:01.675969  215342 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 18:25:01.676044  215342 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1018 18:25:01.676127  215342 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 18:25:01.676213  215342 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 18:25:01.676296  215342 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 18:25:01.676378  215342 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 18:25:01.676459  215342 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 18:25:01.676542  215342 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 18:25:01.676647  215342 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1018 18:25:01.821917  215342 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 18:25:01.822089  215342 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 18:25:01.822219  215342 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 18:25:01.843815  215342 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 18:25:02.085868  211246 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.750659743s)
	I1018 18:25:02.274554  211246 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.870285995s)
	I1018 18:25:02.274593  211246 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1018 18:25:02.275680  211246 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.871580042s)
	I1018 18:25:02.276438  211246 node_ready.go:35] waiting up to 6m0s for node "no-preload-729957" to be "Ready" ...
	I1018 18:25:02.794828  211246 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-729957" context rescaled to 1 replicas
	I1018 18:25:02.856762  211246 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.311374127s)
	I1018 18:25:02.860011  211246 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1018 18:25:01.847885  215342 out.go:252]   - Generating certificates and keys ...
	I1018 18:25:01.847982  215342 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 18:25:01.848059  215342 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 18:25:02.147067  215342 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 18:25:02.728274  215342 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 18:25:02.912859  215342 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 18:25:03.892952  215342 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 18:25:04.263331  215342 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 18:25:04.263952  215342 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-530891] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1018 18:25:04.473330  215342 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 18:25:04.473949  215342 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-530891] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1018 18:25:04.816786  215342 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 18:25:05.522094  215342 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 18:25:02.862939  211246 addons.go:514] duration metric: took 3.363237374s for enable addons: enabled=[default-storageclass storage-provisioner]
	W1018 18:25:04.279583  211246 node_ready.go:57] node "no-preload-729957" has "Ready":"False" status (will retry)
	I1018 18:25:05.974571  215342 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 18:25:05.974984  215342 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 18:25:06.723520  215342 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 18:25:08.186598  215342 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 18:25:08.429335  215342 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 18:25:08.643204  215342 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 18:25:09.054377  215342 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 18:25:09.055075  215342 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 18:25:09.059957  215342 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 18:25:09.063476  215342 out.go:252]   - Booting up control plane ...
	I1018 18:25:09.063581  215342 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 18:25:09.063667  215342 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 18:25:09.064325  215342 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 18:25:09.079778  215342 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 18:25:09.079895  215342 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 18:25:09.087313  215342 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 18:25:09.087765  215342 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 18:25:09.087998  215342 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 18:25:09.221506  215342 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 18:25:09.221634  215342 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	W1018 18:25:06.280922  211246 node_ready.go:57] node "no-preload-729957" has "Ready":"False" status (will retry)
	W1018 18:25:08.780474  211246 node_ready.go:57] node "no-preload-729957" has "Ready":"False" status (will retry)
	I1018 18:25:11.221434  215342 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 2.000790529s
	I1018 18:25:11.224741  215342 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 18:25:11.224841  215342 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1018 18:25:11.225181  215342 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 18:25:11.225277  215342 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 18:25:13.889564  215342 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.664362067s
	W1018 18:25:11.279934  211246 node_ready.go:57] node "no-preload-729957" has "Ready":"False" status (will retry)
	W1018 18:25:13.779444  211246 node_ready.go:57] node "no-preload-729957" has "Ready":"False" status (will retry)
	W1018 18:25:15.780007  211246 node_ready.go:57] node "no-preload-729957" has "Ready":"False" status (will retry)
	I1018 18:25:17.476339  215342 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 6.251561218s
	I1018 18:25:17.727118  215342 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.502142394s
	I1018 18:25:17.748931  215342 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 18:25:17.768324  215342 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 18:25:17.779358  215342 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 18:25:17.779564  215342 kubeadm.go:318] [mark-control-plane] Marking the node newest-cni-530891 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 18:25:17.792355  215342 kubeadm.go:318] [bootstrap-token] Using token: 5rfsqk.2p7sczsef9jhpde8
	I1018 18:25:16.281790  211246 node_ready.go:49] node "no-preload-729957" is "Ready"
	I1018 18:25:16.281820  211246 node_ready.go:38] duration metric: took 14.005358073s for node "no-preload-729957" to be "Ready" ...
	I1018 18:25:16.281833  211246 api_server.go:52] waiting for apiserver process to appear ...
	I1018 18:25:16.281895  211246 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 18:25:16.316789  211246 api_server.go:72] duration metric: took 16.817433135s to wait for apiserver process to appear ...
	I1018 18:25:16.316812  211246 api_server.go:88] waiting for apiserver healthz status ...
	I1018 18:25:16.316830  211246 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 18:25:16.329010  211246 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1018 18:25:16.330363  211246 api_server.go:141] control plane version: v1.34.1
	I1018 18:25:16.330384  211246 api_server.go:131] duration metric: took 13.565549ms to wait for apiserver health ...
	I1018 18:25:16.330393  211246 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 18:25:16.337413  211246 system_pods.go:59] 8 kube-system pods found
	I1018 18:25:16.337445  211246 system_pods.go:61] "coredns-66bc5c9577-q7mng" [365b51ac-c2aa-4247-a37e-ef5ce5d54a36] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 18:25:16.337452  211246 system_pods.go:61] "etcd-no-preload-729957" [29023f58-84ea-44ad-b6e8-cc5cf720a4be] Running
	I1018 18:25:16.337458  211246 system_pods.go:61] "kindnet-4hbt7" [6c9fa05f-7c37-442d-b3fa-ee037c865d3e] Running
	I1018 18:25:16.337463  211246 system_pods.go:61] "kube-apiserver-no-preload-729957" [ea721a8e-b407-4422-b1c1-dc40032787ee] Running
	I1018 18:25:16.337468  211246 system_pods.go:61] "kube-controller-manager-no-preload-729957" [bf889e9e-777e-403a-b4ef-3582a86bafbb] Running
	I1018 18:25:16.337472  211246 system_pods.go:61] "kube-proxy-75znn" [c6f7e4f1-ccc0-40c5-b449-fb42e743f373] Running
	I1018 18:25:16.337477  211246 system_pods.go:61] "kube-scheduler-no-preload-729957" [fa436526-c2f9-43b9-a48e-57dc63916082] Running
	I1018 18:25:16.337484  211246 system_pods.go:61] "storage-provisioner" [4bef6a17-c67c-4394-837e-c20c6378a6ed] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 18:25:16.337490  211246 system_pods.go:74] duration metric: took 7.091907ms to wait for pod list to return data ...
	I1018 18:25:16.337497  211246 default_sa.go:34] waiting for default service account to be created ...
	I1018 18:25:16.341174  211246 default_sa.go:45] found service account: "default"
	I1018 18:25:16.341249  211246 default_sa.go:55] duration metric: took 3.744852ms for default service account to be created ...
	I1018 18:25:16.341260  211246 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 18:25:16.344812  211246 system_pods.go:86] 8 kube-system pods found
	I1018 18:25:16.344842  211246 system_pods.go:89] "coredns-66bc5c9577-q7mng" [365b51ac-c2aa-4247-a37e-ef5ce5d54a36] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 18:25:16.344848  211246 system_pods.go:89] "etcd-no-preload-729957" [29023f58-84ea-44ad-b6e8-cc5cf720a4be] Running
	I1018 18:25:16.344854  211246 system_pods.go:89] "kindnet-4hbt7" [6c9fa05f-7c37-442d-b3fa-ee037c865d3e] Running
	I1018 18:25:16.344859  211246 system_pods.go:89] "kube-apiserver-no-preload-729957" [ea721a8e-b407-4422-b1c1-dc40032787ee] Running
	I1018 18:25:16.344863  211246 system_pods.go:89] "kube-controller-manager-no-preload-729957" [bf889e9e-777e-403a-b4ef-3582a86bafbb] Running
	I1018 18:25:16.344867  211246 system_pods.go:89] "kube-proxy-75znn" [c6f7e4f1-ccc0-40c5-b449-fb42e743f373] Running
	I1018 18:25:16.344871  211246 system_pods.go:89] "kube-scheduler-no-preload-729957" [fa436526-c2f9-43b9-a48e-57dc63916082] Running
	I1018 18:25:16.344878  211246 system_pods.go:89] "storage-provisioner" [4bef6a17-c67c-4394-837e-c20c6378a6ed] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 18:25:16.344911  211246 retry.go:31] will retry after 310.808913ms: missing components: kube-dns
	I1018 18:25:16.660977  211246 system_pods.go:86] 8 kube-system pods found
	I1018 18:25:16.661009  211246 system_pods.go:89] "coredns-66bc5c9577-q7mng" [365b51ac-c2aa-4247-a37e-ef5ce5d54a36] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 18:25:16.661017  211246 system_pods.go:89] "etcd-no-preload-729957" [29023f58-84ea-44ad-b6e8-cc5cf720a4be] Running
	I1018 18:25:16.661023  211246 system_pods.go:89] "kindnet-4hbt7" [6c9fa05f-7c37-442d-b3fa-ee037c865d3e] Running
	I1018 18:25:16.661027  211246 system_pods.go:89] "kube-apiserver-no-preload-729957" [ea721a8e-b407-4422-b1c1-dc40032787ee] Running
	I1018 18:25:16.661032  211246 system_pods.go:89] "kube-controller-manager-no-preload-729957" [bf889e9e-777e-403a-b4ef-3582a86bafbb] Running
	I1018 18:25:16.661036  211246 system_pods.go:89] "kube-proxy-75znn" [c6f7e4f1-ccc0-40c5-b449-fb42e743f373] Running
	I1018 18:25:16.661040  211246 system_pods.go:89] "kube-scheduler-no-preload-729957" [fa436526-c2f9-43b9-a48e-57dc63916082] Running
	I1018 18:25:16.661046  211246 system_pods.go:89] "storage-provisioner" [4bef6a17-c67c-4394-837e-c20c6378a6ed] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 18:25:16.661059  211246 retry.go:31] will retry after 269.256949ms: missing components: kube-dns
	I1018 18:25:16.936209  211246 system_pods.go:86] 8 kube-system pods found
	I1018 18:25:16.936239  211246 system_pods.go:89] "coredns-66bc5c9577-q7mng" [365b51ac-c2aa-4247-a37e-ef5ce5d54a36] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 18:25:16.936248  211246 system_pods.go:89] "etcd-no-preload-729957" [29023f58-84ea-44ad-b6e8-cc5cf720a4be] Running
	I1018 18:25:16.936255  211246 system_pods.go:89] "kindnet-4hbt7" [6c9fa05f-7c37-442d-b3fa-ee037c865d3e] Running
	I1018 18:25:16.936260  211246 system_pods.go:89] "kube-apiserver-no-preload-729957" [ea721a8e-b407-4422-b1c1-dc40032787ee] Running
	I1018 18:25:16.936265  211246 system_pods.go:89] "kube-controller-manager-no-preload-729957" [bf889e9e-777e-403a-b4ef-3582a86bafbb] Running
	I1018 18:25:16.936269  211246 system_pods.go:89] "kube-proxy-75znn" [c6f7e4f1-ccc0-40c5-b449-fb42e743f373] Running
	I1018 18:25:16.936273  211246 system_pods.go:89] "kube-scheduler-no-preload-729957" [fa436526-c2f9-43b9-a48e-57dc63916082] Running
	I1018 18:25:16.936279  211246 system_pods.go:89] "storage-provisioner" [4bef6a17-c67c-4394-837e-c20c6378a6ed] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 18:25:16.936293  211246 retry.go:31] will retry after 380.700224ms: missing components: kube-dns
	I1018 18:25:17.325267  211246 system_pods.go:86] 8 kube-system pods found
	I1018 18:25:17.325309  211246 system_pods.go:89] "coredns-66bc5c9577-q7mng" [365b51ac-c2aa-4247-a37e-ef5ce5d54a36] Running
	I1018 18:25:17.325317  211246 system_pods.go:89] "etcd-no-preload-729957" [29023f58-84ea-44ad-b6e8-cc5cf720a4be] Running
	I1018 18:25:17.325322  211246 system_pods.go:89] "kindnet-4hbt7" [6c9fa05f-7c37-442d-b3fa-ee037c865d3e] Running
	I1018 18:25:17.325326  211246 system_pods.go:89] "kube-apiserver-no-preload-729957" [ea721a8e-b407-4422-b1c1-dc40032787ee] Running
	I1018 18:25:17.325331  211246 system_pods.go:89] "kube-controller-manager-no-preload-729957" [bf889e9e-777e-403a-b4ef-3582a86bafbb] Running
	I1018 18:25:17.325335  211246 system_pods.go:89] "kube-proxy-75znn" [c6f7e4f1-ccc0-40c5-b449-fb42e743f373] Running
	I1018 18:25:17.325343  211246 system_pods.go:89] "kube-scheduler-no-preload-729957" [fa436526-c2f9-43b9-a48e-57dc63916082] Running
	I1018 18:25:17.325347  211246 system_pods.go:89] "storage-provisioner" [4bef6a17-c67c-4394-837e-c20c6378a6ed] Running
	I1018 18:25:17.325356  211246 system_pods.go:126] duration metric: took 984.088997ms to wait for k8s-apps to be running ...
	I1018 18:25:17.325366  211246 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 18:25:17.325444  211246 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 18:25:17.351014  211246 system_svc.go:56] duration metric: took 25.635078ms WaitForService to wait for kubelet
	I1018 18:25:17.351055  211246 kubeadm.go:586] duration metric: took 17.851701666s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 18:25:17.351075  211246 node_conditions.go:102] verifying NodePressure condition ...
	I1018 18:25:17.356618  211246 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 18:25:17.356692  211246 node_conditions.go:123] node cpu capacity is 2
	I1018 18:25:17.356711  211246 node_conditions.go:105] duration metric: took 5.630087ms to run NodePressure ...
	I1018 18:25:17.356723  211246 start.go:241] waiting for startup goroutines ...
	I1018 18:25:17.356734  211246 start.go:246] waiting for cluster config update ...
	I1018 18:25:17.356745  211246 start.go:255] writing updated cluster config ...
	I1018 18:25:17.357185  211246 ssh_runner.go:195] Run: rm -f paused
	I1018 18:25:17.363349  211246 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 18:25:17.369122  211246 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-q7mng" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:25:17.377540  211246 pod_ready.go:94] pod "coredns-66bc5c9577-q7mng" is "Ready"
	I1018 18:25:17.377646  211246 pod_ready.go:86] duration metric: took 8.417914ms for pod "coredns-66bc5c9577-q7mng" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:25:17.381935  211246 pod_ready.go:83] waiting for pod "etcd-no-preload-729957" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:25:17.389829  211246 pod_ready.go:94] pod "etcd-no-preload-729957" is "Ready"
	I1018 18:25:17.389930  211246 pod_ready.go:86] duration metric: took 7.909328ms for pod "etcd-no-preload-729957" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:25:17.394056  211246 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-729957" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:25:17.402059  211246 pod_ready.go:94] pod "kube-apiserver-no-preload-729957" is "Ready"
	I1018 18:25:17.402146  211246 pod_ready.go:86] duration metric: took 8.008439ms for pod "kube-apiserver-no-preload-729957" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:25:17.407472  211246 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-729957" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:25:17.767787  211246 pod_ready.go:94] pod "kube-controller-manager-no-preload-729957" is "Ready"
	I1018 18:25:17.767816  211246 pod_ready.go:86] duration metric: took 360.263833ms for pod "kube-controller-manager-no-preload-729957" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:25:17.967780  211246 pod_ready.go:83] waiting for pod "kube-proxy-75znn" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:25:18.368094  211246 pod_ready.go:94] pod "kube-proxy-75znn" is "Ready"
	I1018 18:25:18.368168  211246 pod_ready.go:86] duration metric: took 400.362877ms for pod "kube-proxy-75znn" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:25:18.568791  211246 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-729957" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:25:18.967573  211246 pod_ready.go:94] pod "kube-scheduler-no-preload-729957" is "Ready"
	I1018 18:25:18.967602  211246 pod_ready.go:86] duration metric: took 398.736296ms for pod "kube-scheduler-no-preload-729957" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:25:18.967618  211246 pod_ready.go:40] duration metric: took 1.604161571s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 18:25:19.023503  211246 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 18:25:19.028722  211246 out.go:179] * Done! kubectl is now configured to use "no-preload-729957" cluster and "default" namespace by default
	I1018 18:25:17.795223  215342 out.go:252]   - Configuring RBAC rules ...
	I1018 18:25:17.795362  215342 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 18:25:17.808247  215342 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 18:25:17.817703  215342 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 18:25:17.822423  215342 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 18:25:17.826737  215342 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 18:25:17.833546  215342 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 18:25:18.136055  215342 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 18:25:18.590231  215342 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 18:25:19.138279  215342 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 18:25:19.139753  215342 kubeadm.go:318] 
	I1018 18:25:19.139872  215342 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 18:25:19.139921  215342 kubeadm.go:318] 
	I1018 18:25:19.140018  215342 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 18:25:19.140024  215342 kubeadm.go:318] 
	I1018 18:25:19.140051  215342 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 18:25:19.140330  215342 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 18:25:19.140408  215342 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 18:25:19.140420  215342 kubeadm.go:318] 
	I1018 18:25:19.140506  215342 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 18:25:19.140515  215342 kubeadm.go:318] 
	I1018 18:25:19.140583  215342 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 18:25:19.140597  215342 kubeadm.go:318] 
	I1018 18:25:19.140682  215342 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 18:25:19.140885  215342 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 18:25:19.141065  215342 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 18:25:19.141076  215342 kubeadm.go:318] 
	I1018 18:25:19.141224  215342 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 18:25:19.141319  215342 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 18:25:19.141326  215342 kubeadm.go:318] 
	I1018 18:25:19.141414  215342 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 5rfsqk.2p7sczsef9jhpde8 \
	I1018 18:25:19.141522  215342 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:d0244c5bf86cdf97546c6a22045cb6ed9d7ead524d9c98d9ca35da77d5d7a04d \
	I1018 18:25:19.141544  215342 kubeadm.go:318] 	--control-plane 
	I1018 18:25:19.141548  215342 kubeadm.go:318] 
	I1018 18:25:19.141638  215342 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 18:25:19.141642  215342 kubeadm.go:318] 
	I1018 18:25:19.141728  215342 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 5rfsqk.2p7sczsef9jhpde8 \
	I1018 18:25:19.142092  215342 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:d0244c5bf86cdf97546c6a22045cb6ed9d7ead524d9c98d9ca35da77d5d7a04d 
	I1018 18:25:19.148616  215342 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1018 18:25:19.148883  215342 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1018 18:25:19.149048  215342 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1018 18:25:19.150763  215342 cni.go:84] Creating CNI manager for ""
	I1018 18:25:19.150782  215342 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 18:25:19.156284  215342 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1018 18:25:19.159596  215342 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 18:25:19.170018  215342 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1018 18:25:19.170045  215342 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 18:25:19.187275  215342 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1018 18:25:19.529975  215342 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 18:25:19.530128  215342 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:25:19.530232  215342 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-530891 minikube.k8s.io/updated_at=2025_10_18T18_25_19_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404 minikube.k8s.io/name=newest-cni-530891 minikube.k8s.io/primary=true
	I1018 18:25:19.546403  215342 ops.go:34] apiserver oom_adj: -16
	I1018 18:25:19.716145  215342 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:25:20.216259  215342 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:25:20.716283  215342 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:25:21.216211  215342 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:25:21.716673  215342 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:25:22.216245  215342 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:25:22.716412  215342 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:25:23.217127  215342 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:25:23.717125  215342 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:25:24.217153  215342 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:25:24.384312  215342 kubeadm.go:1113] duration metric: took 4.854232202s to wait for elevateKubeSystemPrivileges
	I1018 18:25:24.384345  215342 kubeadm.go:402] duration metric: took 23.130230042s to StartCluster
	I1018 18:25:24.384362  215342 settings.go:142] acquiring lock: {Name:mk3a3fd093bc95e20cc1842611fedcbe4a79e692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:25:24.384435  215342 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-2509/kubeconfig
	I1018 18:25:24.385461  215342 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/kubeconfig: {Name:mk229d7d0b6575771899e0ce8a346bbddf5cf86b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:25:24.385698  215342 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 18:25:24.385706  215342 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 18:25:24.385959  215342 config.go:182] Loaded profile config "newest-cni-530891": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 18:25:24.386001  215342 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 18:25:24.386071  215342 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-530891"
	I1018 18:25:24.386092  215342 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-530891"
	I1018 18:25:24.386118  215342 host.go:66] Checking if "newest-cni-530891" exists ...
	I1018 18:25:24.386565  215342 cli_runner.go:164] Run: docker container inspect newest-cni-530891 --format={{.State.Status}}
	I1018 18:25:24.387029  215342 addons.go:69] Setting default-storageclass=true in profile "newest-cni-530891"
	I1018 18:25:24.387054  215342 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-530891"
	I1018 18:25:24.387312  215342 cli_runner.go:164] Run: docker container inspect newest-cni-530891 --format={{.State.Status}}
	I1018 18:25:24.388985  215342 out.go:179] * Verifying Kubernetes components...
	I1018 18:25:24.393047  215342 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 18:25:24.434895  215342 addons.go:238] Setting addon default-storageclass=true in "newest-cni-530891"
	I1018 18:25:24.434933  215342 host.go:66] Checking if "newest-cni-530891" exists ...
	I1018 18:25:24.435350  215342 cli_runner.go:164] Run: docker container inspect newest-cni-530891 --format={{.State.Status}}
	I1018 18:25:24.435607  215342 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 18:25:24.438628  215342 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 18:25:24.438656  215342 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 18:25:24.438731  215342 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-530891
	I1018 18:25:24.472035  215342 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 18:25:24.472055  215342 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 18:25:24.472117  215342 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-530891
	I1018 18:25:24.497060  215342 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/newest-cni-530891/id_rsa Username:docker}
	I1018 18:25:24.505708  215342 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/newest-cni-530891/id_rsa Username:docker}
	I1018 18:25:24.786614  215342 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 18:25:24.789277  215342 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1018 18:25:24.789355  215342 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 18:25:24.836542  215342 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 18:25:25.539322  215342 api_server.go:52] waiting for apiserver process to appear ...
	I1018 18:25:25.539422  215342 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1018 18:25:25.540991  215342 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 18:25:25.568738  215342 api_server.go:72] duration metric: took 1.183004264s to wait for apiserver process to appear ...
	I1018 18:25:25.568758  215342 api_server.go:88] waiting for apiserver healthz status ...
	I1018 18:25:25.568777  215342 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 18:25:25.586653  215342 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1018 18:25:25.589312  215342 api_server.go:141] control plane version: v1.34.1
	I1018 18:25:25.589336  215342 api_server.go:131] duration metric: took 20.571284ms to wait for apiserver health ...
	I1018 18:25:25.589345  215342 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 18:25:25.589592  215342 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1018 18:25:25.592763  215342 addons.go:514] duration metric: took 1.206752401s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1018 18:25:25.594678  215342 system_pods.go:59] 9 kube-system pods found
	I1018 18:25:25.594746  215342 system_pods.go:61] "coredns-66bc5c9577-brzb4" [762df58f-b70f-479e-b130-07c24a8f3f51] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1018 18:25:25.594769  215342 system_pods.go:61] "coredns-66bc5c9577-zc9k5" [45c70073-29fd-4d82-9dfb-d20628d4a3de] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1018 18:25:25.594796  215342 system_pods.go:61] "etcd-newest-cni-530891" [1cf783e9-928f-47f5-be9d-4df2479e9b31] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 18:25:25.594826  215342 system_pods.go:61] "kindnet-497z4" [99e6305c-fb9e-4f10-9746-3dfdd03c570a] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1018 18:25:25.594852  215342 system_pods.go:61] "kube-apiserver-newest-cni-530891" [b43d0e4b-98c3-4c5e-96dc-4ab8c7913e63] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 18:25:25.594868  215342 system_pods.go:61] "kube-controller-manager-newest-cni-530891" [0d687e21-ef2f-4a67-94ea-d40750239b57] Running
	I1018 18:25:25.594896  215342 system_pods.go:61] "kube-proxy-k8ljb" [2f4233c2-bc5d-452a-84e3-875564801a54] Pending
	I1018 18:25:25.594917  215342 system_pods.go:61] "kube-scheduler-newest-cni-530891" [a81c1ce2-edc2-4f88-aebd-d06916133c38] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 18:25:25.594943  215342 system_pods.go:61] "storage-provisioner" [b2348e9f-6e43-4f09-a0c0-01ab697d968a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1018 18:25:25.594972  215342 system_pods.go:74] duration metric: took 5.62029ms to wait for pod list to return data ...
	I1018 18:25:25.594993  215342 default_sa.go:34] waiting for default service account to be created ...
	I1018 18:25:25.605830  215342 default_sa.go:45] found service account: "default"
	I1018 18:25:25.605903  215342 default_sa.go:55] duration metric: took 10.891118ms for default service account to be created ...
	I1018 18:25:25.605930  215342 kubeadm.go:586] duration metric: took 1.220200451s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1018 18:25:25.605983  215342 node_conditions.go:102] verifying NodePressure condition ...
	I1018 18:25:25.618693  215342 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 18:25:25.618774  215342 node_conditions.go:123] node cpu capacity is 2
	I1018 18:25:25.618812  215342 node_conditions.go:105] duration metric: took 12.80343ms to run NodePressure ...
	I1018 18:25:25.618836  215342 start.go:241] waiting for startup goroutines ...
	I1018 18:25:26.043788  215342 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-530891" context rescaled to 1 replicas
	I1018 18:25:26.043836  215342 start.go:246] waiting for cluster config update ...
	I1018 18:25:26.043869  215342 start.go:255] writing updated cluster config ...
	I1018 18:25:26.044199  215342 ssh_runner.go:195] Run: rm -f paused
	I1018 18:25:26.110679  215342 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 18:25:26.113893  215342 out.go:179] * Done! kubectl is now configured to use "newest-cni-530891" cluster and "default" namespace by default
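	The no-preload startup above polls the apiserver's /healthz endpoint, waits for the kube-system pod list to include a running kube-dns, and then waits for each control-plane pod to be Ready; the newest-cni startup stops after the healthz and service-account checks because apps_running/node_ready waiting is disabled for that profile. A minimal sketch of the same checks by hand, assuming only that the kubeconfig context matches the profile name, as the "Done!" lines state:
	
	  kubectl --context no-preload-729957 get --raw='/healthz'
	  kubectl --context no-preload-729957 -n kube-system get pods
	  kubectl --context no-preload-729957 -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=4m
	
	The 4m timeout mirrors the "extra waiting up to 4m0s" budget the log shows for the same label set.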
	
	
	==> CRI-O <==
	Oct 18 18:25:26 newest-cni-530891 crio[837]: time="2025-10-18T18:25:26.462754695Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 18:25:26 newest-cni-530891 crio[837]: time="2025-10-18T18:25:26.4673689Z" level=info msg="Running pod sandbox: kube-system/kindnet-497z4/POD" id=ad68dab0-cedc-4d54-a347-175888129694 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 18:25:26 newest-cni-530891 crio[837]: time="2025-10-18T18:25:26.467510949Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 18:25:26 newest-cni-530891 crio[837]: time="2025-10-18T18:25:26.473247179Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=adbd4165-7752-430e-a49a-75d26b3bc0b1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 18:25:26 newest-cni-530891 crio[837]: time="2025-10-18T18:25:26.47687692Z" level=info msg="Ran pod sandbox a185f71a6daa5599cc3785f50917a3dabcc26d1547b95b1a48536e591bed939c with infra container: kube-system/kube-proxy-k8ljb/POD" id=adbd4165-7752-430e-a49a-75d26b3bc0b1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 18:25:26 newest-cni-530891 crio[837]: time="2025-10-18T18:25:26.478206718Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=f1829f6a-6007-4a46-9f87-3dcd1f556f7d name=/runtime.v1.ImageService/ImageStatus
	Oct 18 18:25:26 newest-cni-530891 crio[837]: time="2025-10-18T18:25:26.483079609Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=cb0bd628-eea9-4e8b-8b41-990196f3af98 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 18:25:26 newest-cni-530891 crio[837]: time="2025-10-18T18:25:26.485731459Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=ad68dab0-cedc-4d54-a347-175888129694 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 18:25:26 newest-cni-530891 crio[837]: time="2025-10-18T18:25:26.492800678Z" level=info msg="Creating container: kube-system/kube-proxy-k8ljb/kube-proxy" id=2ccbebee-39bc-4e23-9d84-fd8448130da7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 18:25:26 newest-cni-530891 crio[837]: time="2025-10-18T18:25:26.493228835Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 18:25:26 newest-cni-530891 crio[837]: time="2025-10-18T18:25:26.495897374Z" level=info msg="Ran pod sandbox 22d5544eea2b0473976e92ce69cc97e9607656964c0c0a157fc7c60fdefc3dae with infra container: kube-system/kindnet-497z4/POD" id=ad68dab0-cedc-4d54-a347-175888129694 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 18:25:26 newest-cni-530891 crio[837]: time="2025-10-18T18:25:26.497169202Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=41f6b4b1-0e25-462d-b93f-b29203142919 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 18:25:26 newest-cni-530891 crio[837]: time="2025-10-18T18:25:26.504576714Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=4eeb9002-0379-42eb-8e4d-ac9c83645898 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 18:25:26 newest-cni-530891 crio[837]: time="2025-10-18T18:25:26.505194117Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 18:25:26 newest-cni-530891 crio[837]: time="2025-10-18T18:25:26.506557351Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 18:25:26 newest-cni-530891 crio[837]: time="2025-10-18T18:25:26.510770673Z" level=info msg="Creating container: kube-system/kindnet-497z4/kindnet-cni" id=bf38c501-b523-4c7e-8793-cedec1975104 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 18:25:26 newest-cni-530891 crio[837]: time="2025-10-18T18:25:26.511092006Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 18:25:26 newest-cni-530891 crio[837]: time="2025-10-18T18:25:26.516410516Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 18:25:26 newest-cni-530891 crio[837]: time="2025-10-18T18:25:26.517480011Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 18:25:26 newest-cni-530891 crio[837]: time="2025-10-18T18:25:26.552902686Z" level=info msg="Created container 51f3dc8f64dbdf003b80c7fd0efa9b6cdbc6b794d7cf2f0e489cb8639c5b275e: kube-system/kindnet-497z4/kindnet-cni" id=bf38c501-b523-4c7e-8793-cedec1975104 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 18:25:26 newest-cni-530891 crio[837]: time="2025-10-18T18:25:26.565164029Z" level=info msg="Starting container: 51f3dc8f64dbdf003b80c7fd0efa9b6cdbc6b794d7cf2f0e489cb8639c5b275e" id=7cd80034-9095-4f44-8ce9-adce17ed46ae name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 18:25:26 newest-cni-530891 crio[837]: time="2025-10-18T18:25:26.569705437Z" level=info msg="Created container e7727873384632318a2c3ca0608366a731398d8660939400b467d1fa5e1f1438: kube-system/kube-proxy-k8ljb/kube-proxy" id=2ccbebee-39bc-4e23-9d84-fd8448130da7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 18:25:26 newest-cni-530891 crio[837]: time="2025-10-18T18:25:26.579438797Z" level=info msg="Started container" PID=1519 containerID=51f3dc8f64dbdf003b80c7fd0efa9b6cdbc6b794d7cf2f0e489cb8639c5b275e description=kube-system/kindnet-497z4/kindnet-cni id=7cd80034-9095-4f44-8ce9-adce17ed46ae name=/runtime.v1.RuntimeService/StartContainer sandboxID=22d5544eea2b0473976e92ce69cc97e9607656964c0c0a157fc7c60fdefc3dae
	Oct 18 18:25:26 newest-cni-530891 crio[837]: time="2025-10-18T18:25:26.579533092Z" level=info msg="Starting container: e7727873384632318a2c3ca0608366a731398d8660939400b467d1fa5e1f1438" id=210635f6-9ff3-4d2f-9d3e-ac94575d9322 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 18:25:26 newest-cni-530891 crio[837]: time="2025-10-18T18:25:26.599042438Z" level=info msg="Started container" PID=1517 containerID=e7727873384632318a2c3ca0608366a731398d8660939400b467d1fa5e1f1438 description=kube-system/kube-proxy-k8ljb/kube-proxy id=210635f6-9ff3-4d2f-9d3e-ac94575d9322 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a185f71a6daa5599cc3785f50917a3dabcc26d1547b95b1a48536e591bed939c
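	The CRI-O log above shows the kube-proxy and kindnet-cni pod sandboxes being created and their first containers started. One way to list those containers on the node itself is crictl over minikube ssh; a sketch using the profile name from this run (output columns vary slightly by crictl version):
	
	  minikube -p newest-cni-530891 ssh -- sudo crictl ps --name kindnet-cni
	  minikube -p newest-cni-530891 ssh -- sudo crictl ps --name kube-proxy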
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	51f3dc8f64dbd       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   1 second ago        Running             kindnet-cni               0                   22d5544eea2b0       kindnet-497z4                               kube-system
	e772787338463       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   1 second ago        Running             kube-proxy                0                   a185f71a6daa5       kube-proxy-k8ljb                            kube-system
	b10aa4467af0d       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   15 seconds ago      Running             etcd                      0                   61f059e7ea838       etcd-newest-cni-530891                      kube-system
	a7a20e263584b       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   15 seconds ago      Running             kube-apiserver            0                   e7510409ef59e       kube-apiserver-newest-cni-530891            kube-system
	2cf62364f1382       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   15 seconds ago      Running             kube-controller-manager   0                   38c9875bf6ffa       kube-controller-manager-newest-cni-530891   kube-system
	0a2d0704d82fc       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   15 seconds ago      Running             kube-scheduler            0                   c5175709e3b94       kube-scheduler-newest-cni-530891            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-530891
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-530891
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=newest-cni-530891
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T18_25_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 18:25:15 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	                    node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-530891
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 18:25:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 18:25:18 +0000   Sat, 18 Oct 2025 18:25:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 18:25:18 +0000   Sat, 18 Oct 2025 18:25:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 18:25:18 +0000   Sat, 18 Oct 2025 18:25:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 18 Oct 2025 18:25:18 +0000   Sat, 18 Oct 2025 18:25:11 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-530891
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                4a5cb23d-033c-4f7d-ae76-6a54d50540e5
	  Boot ID:                    8a1d6305-2994-4aa4-a4cc-6b62966b9918
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-530891                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         9s
	  kube-system                 kindnet-497z4                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      3s
	  kube-system                 kube-apiserver-newest-cni-530891             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kube-controller-manager-newest-cni-530891    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 kube-proxy-k8ljb                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-newest-cni-530891             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 0s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  16s (x8 over 16s)  kubelet          Node newest-cni-530891 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    16s (x8 over 16s)  kubelet          Node newest-cni-530891 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     16s (x8 over 16s)  kubelet          Node newest-cni-530891 status is now: NodeHasSufficientPID
	  Normal   Starting                 9s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 9s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  9s                 kubelet          Node newest-cni-530891 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9s                 kubelet          Node newest-cni-530891 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9s                 kubelet          Node newest-cni-530891 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4s                 node-controller  Node newest-cni-530891 event: Registered Node newest-cni-530891 in Controller
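	The describe output ties the earlier scheduling failures together: the node still carries node.kubernetes.io/not-ready taints and reports NetworkPluginNotReady because no CNI config exists in /etc/cni/net.d/ yet, which is exactly why coredns and storage-provisioner were Pending with the "untolerated taint" message a few seconds earlier. A quick sketch for watching the taints clear once kindnet drops its CNI config (same context assumption as above):
	
	  kubectl --context newest-cni-530891 get node newest-cni-530891 -o jsonpath='{.spec.taints}'
	  kubectl --context newest-cni-530891 get node newest-cni-530891 -w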
	
	
	==> dmesg <==
	[ +24.403909] overlayfs: idmapped layers are currently not supported
	[  +6.162774] overlayfs: idmapped layers are currently not supported
	[Oct18 18:05] overlayfs: idmapped layers are currently not supported
	[ +25.128760] overlayfs: idmapped layers are currently not supported
	[Oct18 18:06] overlayfs: idmapped layers are currently not supported
	[Oct18 18:07] overlayfs: idmapped layers are currently not supported
	[Oct18 18:08] overlayfs: idmapped layers are currently not supported
	[Oct18 18:09] overlayfs: idmapped layers are currently not supported
	[Oct18 18:11] overlayfs: idmapped layers are currently not supported
	[Oct18 18:13] overlayfs: idmapped layers are currently not supported
	[ +30.969240] overlayfs: idmapped layers are currently not supported
	[Oct18 18:15] overlayfs: idmapped layers are currently not supported
	[Oct18 18:16] overlayfs: idmapped layers are currently not supported
	[Oct18 18:17] overlayfs: idmapped layers are currently not supported
	[ +23.167826] overlayfs: idmapped layers are currently not supported
	[Oct18 18:18] overlayfs: idmapped layers are currently not supported
	[ +38.509809] overlayfs: idmapped layers are currently not supported
	[Oct18 18:19] overlayfs: idmapped layers are currently not supported
	[Oct18 18:21] overlayfs: idmapped layers are currently not supported
	[Oct18 18:22] overlayfs: idmapped layers are currently not supported
	[Oct18 18:23] overlayfs: idmapped layers are currently not supported
	[ +30.822562] overlayfs: idmapped layers are currently not supported
	[Oct18 18:24] bpfilter: read fail -512
	[ +10.607871] overlayfs: idmapped layers are currently not supported
	[Oct18 18:25] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [b10aa4467af0d9a55aaec60219333eb670afb0856887d00b11a73b3e7a975388] <==
	{"level":"warn","ts":"2025-10-18T18:25:13.787162Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:13.840669Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:13.870196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:13.894046Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:13.912568Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:13.928259Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:13.946766Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:13.977321Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:13.989378Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:14.009025Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:14.033854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:14.048664Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:14.062471Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:14.081497Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:14.102743Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:14.117899Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:14.136216Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:14.153629Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:14.181051Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:14.192391Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:14.215743Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:14.243853Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:14.258661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:14.285868Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:14.388101Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33722","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 18:25:27 up  2:07,  0 user,  load average: 4.22, 3.36, 2.90
	Linux newest-cni-530891 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [51f3dc8f64dbdf003b80c7fd0efa9b6cdbc6b794d7cf2f0e489cb8639c5b275e] <==
	I1018 18:25:26.708012       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 18:25:26.708434       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1018 18:25:26.708655       1 main.go:148] setting mtu 1500 for CNI 
	I1018 18:25:26.708699       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 18:25:26.708748       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T18:25:26Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 18:25:26.919132       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 18:25:26.919199       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 18:25:26.919258       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 18:25:26.919687       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
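	kindnet has reached the apiserver and is starting its network-policy controller; the "nri plugin exited" line only means CRI-O's NRI socket (/var/run/nri/nri.sock) is absent, and kindnet carries on without it. To confirm the CNI config that clears the node's NotReady condition actually landed, a sketch over minikube ssh:
	
	  minikube -p newest-cni-530891 ssh -- ls /etc/cni/net.d/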
	
	
	==> kube-apiserver [a7a20e263584bd4ee83c8e70540e1e5e5d698e752c8c46c6c68f0d86cf493729] <==
	E1018 18:25:15.346739       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	E1018 18:25:15.347636       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1018 18:25:15.389318       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 18:25:15.421238       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 18:25:15.425399       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1018 18:25:15.456173       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 18:25:15.466148       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 18:25:15.557314       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 18:25:16.052693       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1018 18:25:16.063980       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1018 18:25:16.064077       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 18:25:17.221370       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 18:25:17.295249       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 18:25:17.403514       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1018 18:25:17.428123       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1018 18:25:17.429656       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 18:25:17.435772       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 18:25:18.213297       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 18:25:18.551034       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 18:25:18.586019       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1018 18:25:18.609104       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1018 18:25:24.166479       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 18:25:24.270698       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1018 18:25:24.380772       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 18:25:24.590950       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [2cf62364f1382c23e15d15727147d195f14ef92feb1cad78a82b4908c8ab9cc1] <==
	I1018 18:25:23.216053       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1018 18:25:23.217357       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1018 18:25:23.217460       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1018 18:25:23.217543       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1018 18:25:23.217613       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1018 18:25:23.217655       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1018 18:25:23.223101       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 18:25:23.223138       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1018 18:25:23.223470       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1018 18:25:23.223636       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1018 18:25:23.223039       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1018 18:25:23.231779       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 18:25:23.239393       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1018 18:25:23.245890       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="newest-cni-530891" podCIDRs=["10.42.0.0/24"]
	I1018 18:25:23.257135       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 18:25:23.258721       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1018 18:25:23.260147       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 18:25:23.260384       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1018 18:25:23.260571       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 18:25:23.260732       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1018 18:25:23.261788       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1018 18:25:23.262166       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1018 18:25:23.262192       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 18:25:23.268903       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 18:25:23.271073       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	
	
	==> kube-proxy [e7727873384632318a2c3ca0608366a731398d8660939400b467d1fa5e1f1438] <==
	I1018 18:25:26.659352       1 server_linux.go:53] "Using iptables proxy"
	I1018 18:25:26.770928       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 18:25:26.873350       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 18:25:26.873399       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1018 18:25:26.873486       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 18:25:26.899360       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 18:25:26.899485       1 server_linux.go:132] "Using iptables Proxier"
	I1018 18:25:26.909277       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 18:25:26.910201       1 server.go:527] "Version info" version="v1.34.1"
	I1018 18:25:26.910366       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 18:25:26.928320       1 config.go:106] "Starting endpoint slice config controller"
	I1018 18:25:26.928421       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 18:25:26.928465       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 18:25:26.928483       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 18:25:26.928863       1 config.go:200] "Starting service config controller"
	I1018 18:25:26.928915       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 18:25:26.929084       1 config.go:309] "Starting node config controller"
	I1018 18:25:26.929132       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 18:25:27.029552       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 18:25:27.029650       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 18:25:27.029690       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 18:25:27.029722       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [0a2d0704d82fcfc74ee0afec968366a21ba454b8cb2419186502922f8de1398f] <==
	I1018 18:25:14.343477       1 serving.go:386] Generated self-signed cert in-memory
	I1018 18:25:17.461708       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 18:25:17.461741       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 18:25:17.467504       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 18:25:17.467632       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1018 18:25:17.467654       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1018 18:25:17.467681       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 18:25:17.471220       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 18:25:17.480580       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 18:25:17.477449       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 18:25:17.480727       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 18:25:17.567794       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1018 18:25:17.581367       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 18:25:17.581371       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Oct 18 18:25:19 newest-cni-530891 kubelet[1295]: I1018 18:25:19.806855    1295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-530891" podStartSLOduration=3.80682767 podStartE2EDuration="3.80682767s" podCreationTimestamp="2025-10-18 18:25:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 18:25:19.78814912 +0000 UTC m=+1.365303863" watchObservedRunningTime="2025-10-18 18:25:19.80682767 +0000 UTC m=+1.383982405"
	Oct 18 18:25:23 newest-cni-530891 kubelet[1295]: I1018 18:25:23.285826    1295 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 18 18:25:23 newest-cni-530891 kubelet[1295]: I1018 18:25:23.286863    1295 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 18 18:25:24 newest-cni-530891 kubelet[1295]: E1018 18:25:24.372734    1295 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-k8ljb\" is forbidden: User \"system:node:newest-cni-530891\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'newest-cni-530891' and this object" podUID="2f4233c2-bc5d-452a-84e3-875564801a54" pod="kube-system/kube-proxy-k8ljb"
	Oct 18 18:25:24 newest-cni-530891 kubelet[1295]: E1018 18:25:24.373752    1295 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:newest-cni-530891\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'newest-cni-530891' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Oct 18 18:25:24 newest-cni-530891 kubelet[1295]: E1018 18:25:24.381378    1295 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:newest-cni-530891\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'newest-cni-530891' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Oct 18 18:25:24 newest-cni-530891 kubelet[1295]: E1018 18:25:24.394916    1295 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-k8ljb\" is forbidden: User \"system:node:newest-cni-530891\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'newest-cni-530891' and this object" podUID="2f4233c2-bc5d-452a-84e3-875564801a54" pod="kube-system/kube-proxy-k8ljb"
	Oct 18 18:25:24 newest-cni-530891 kubelet[1295]: I1018 18:25:24.451674    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/99e6305c-fb9e-4f10-9746-3dfdd03c570a-xtables-lock\") pod \"kindnet-497z4\" (UID: \"99e6305c-fb9e-4f10-9746-3dfdd03c570a\") " pod="kube-system/kindnet-497z4"
	Oct 18 18:25:24 newest-cni-530891 kubelet[1295]: I1018 18:25:24.451732    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2f4233c2-bc5d-452a-84e3-875564801a54-xtables-lock\") pod \"kube-proxy-k8ljb\" (UID: \"2f4233c2-bc5d-452a-84e3-875564801a54\") " pod="kube-system/kube-proxy-k8ljb"
	Oct 18 18:25:24 newest-cni-530891 kubelet[1295]: I1018 18:25:24.451766    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjwjf\" (UniqueName: \"kubernetes.io/projected/2f4233c2-bc5d-452a-84e3-875564801a54-kube-api-access-gjwjf\") pod \"kube-proxy-k8ljb\" (UID: \"2f4233c2-bc5d-452a-84e3-875564801a54\") " pod="kube-system/kube-proxy-k8ljb"
	Oct 18 18:25:24 newest-cni-530891 kubelet[1295]: I1018 18:25:24.451787    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tz65\" (UniqueName: \"kubernetes.io/projected/99e6305c-fb9e-4f10-9746-3dfdd03c570a-kube-api-access-6tz65\") pod \"kindnet-497z4\" (UID: \"99e6305c-fb9e-4f10-9746-3dfdd03c570a\") " pod="kube-system/kindnet-497z4"
	Oct 18 18:25:24 newest-cni-530891 kubelet[1295]: I1018 18:25:24.451808    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/99e6305c-fb9e-4f10-9746-3dfdd03c570a-cni-cfg\") pod \"kindnet-497z4\" (UID: \"99e6305c-fb9e-4f10-9746-3dfdd03c570a\") " pod="kube-system/kindnet-497z4"
	Oct 18 18:25:24 newest-cni-530891 kubelet[1295]: I1018 18:25:24.451825    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2f4233c2-bc5d-452a-84e3-875564801a54-kube-proxy\") pod \"kube-proxy-k8ljb\" (UID: \"2f4233c2-bc5d-452a-84e3-875564801a54\") " pod="kube-system/kube-proxy-k8ljb"
	Oct 18 18:25:24 newest-cni-530891 kubelet[1295]: I1018 18:25:24.451843    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2f4233c2-bc5d-452a-84e3-875564801a54-lib-modules\") pod \"kube-proxy-k8ljb\" (UID: \"2f4233c2-bc5d-452a-84e3-875564801a54\") " pod="kube-system/kube-proxy-k8ljb"
	Oct 18 18:25:24 newest-cni-530891 kubelet[1295]: I1018 18:25:24.451858    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/99e6305c-fb9e-4f10-9746-3dfdd03c570a-lib-modules\") pod \"kindnet-497z4\" (UID: \"99e6305c-fb9e-4f10-9746-3dfdd03c570a\") " pod="kube-system/kindnet-497z4"
	Oct 18 18:25:25 newest-cni-530891 kubelet[1295]: E1018 18:25:25.709302    1295 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Oct 18 18:25:25 newest-cni-530891 kubelet[1295]: E1018 18:25:25.709352    1295 projected.go:196] Error preparing data for projected volume kube-api-access-6tz65 for pod kube-system/kindnet-497z4: failed to sync configmap cache: timed out waiting for the condition
	Oct 18 18:25:25 newest-cni-530891 kubelet[1295]: E1018 18:25:25.709462    1295 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/99e6305c-fb9e-4f10-9746-3dfdd03c570a-kube-api-access-6tz65 podName:99e6305c-fb9e-4f10-9746-3dfdd03c570a nodeName:}" failed. No retries permitted until 2025-10-18 18:25:26.209434794 +0000 UTC m=+7.786589520 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-6tz65" (UniqueName: "kubernetes.io/projected/99e6305c-fb9e-4f10-9746-3dfdd03c570a-kube-api-access-6tz65") pod "kindnet-497z4" (UID: "99e6305c-fb9e-4f10-9746-3dfdd03c570a") : failed to sync configmap cache: timed out waiting for the condition
	Oct 18 18:25:25 newest-cni-530891 kubelet[1295]: E1018 18:25:25.760299    1295 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Oct 18 18:25:25 newest-cni-530891 kubelet[1295]: E1018 18:25:25.760348    1295 projected.go:196] Error preparing data for projected volume kube-api-access-gjwjf for pod kube-system/kube-proxy-k8ljb: failed to sync configmap cache: timed out waiting for the condition
	Oct 18 18:25:25 newest-cni-530891 kubelet[1295]: E1018 18:25:25.760422    1295 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2f4233c2-bc5d-452a-84e3-875564801a54-kube-api-access-gjwjf podName:2f4233c2-bc5d-452a-84e3-875564801a54 nodeName:}" failed. No retries permitted until 2025-10-18 18:25:26.260399683 +0000 UTC m=+7.837554418 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-gjwjf" (UniqueName: "kubernetes.io/projected/2f4233c2-bc5d-452a-84e3-875564801a54-kube-api-access-gjwjf") pod "kube-proxy-k8ljb" (UID: "2f4233c2-bc5d-452a-84e3-875564801a54") : failed to sync configmap cache: timed out waiting for the condition
	Oct 18 18:25:26 newest-cni-530891 kubelet[1295]: I1018 18:25:26.280480    1295 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 18 18:25:26 newest-cni-530891 kubelet[1295]: W1018 18:25:26.488773    1295 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/592c46465c1aa48efe97f2b3db6c46c918fe8e6fb44a63deec22e7bb1784c31e/crio-22d5544eea2b0473976e92ce69cc97e9607656964c0c0a157fc7c60fdefc3dae WatchSource:0}: Error finding container 22d5544eea2b0473976e92ce69cc97e9607656964c0c0a157fc7c60fdefc3dae: Status 404 returned error can't find the container with id 22d5544eea2b0473976e92ce69cc97e9607656964c0c0a157fc7c60fdefc3dae
	Oct 18 18:25:26 newest-cni-530891 kubelet[1295]: I1018 18:25:26.756066    1295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-497z4" podStartSLOduration=2.756046755 podStartE2EDuration="2.756046755s" podCreationTimestamp="2025-10-18 18:25:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 18:25:26.755336461 +0000 UTC m=+8.332491204" watchObservedRunningTime="2025-10-18 18:25:26.756046755 +0000 UTC m=+8.333201490"
	Oct 18 18:25:26 newest-cni-530891 kubelet[1295]: I1018 18:25:26.795788    1295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-k8ljb" podStartSLOduration=2.795768136 podStartE2EDuration="2.795768136s" podCreationTimestamp="2025-10-18 18:25:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 18:25:26.794100775 +0000 UTC m=+8.371255518" watchObservedRunningTime="2025-10-18 18:25:26.795768136 +0000 UTC m=+8.372922863"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-530891 -n newest-cni-530891
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-530891 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-brzb4 storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-530891 describe pod coredns-66bc5c9577-brzb4 storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-530891 describe pod coredns-66bc5c9577-brzb4 storage-provisioner: exit status 1 (123.362736ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-brzb4" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-530891 describe pod coredns-66bc5c9577-brzb4 storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.63s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (3.16s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-729957 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-729957 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (368.989723ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T18:25:28Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p no-preload-729957 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
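The exit status 11 above comes from minikube's pre-flight "paused" check rather than from the metrics-server addon itself: before enabling an addon it lists paused containers on the node via `sudo runc list -f json`, and on this CRI-O node the runc state directory /run/runc does not exist, so the listing exits with status 1 and the enable is aborted. Below is a minimal Go sketch of that kind of check, under the assumption that a missing state directory can safely be read as "no paused containers"; it is illustrative only, not minikube's actual implementation.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// runcContainer mirrors the fields of interest in `runc list -f json` output.
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

// listPaused runs `sudo runc list -f json` and returns the IDs of paused
// containers. A missing state directory (the "open /run/runc: no such file
// or directory" error captured above) is treated here as "nothing is paused"
// rather than as a hard failure -- that relaxation is an assumption.
func listPaused() ([]string, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		if strings.Contains(string(out), "no such file or directory") {
			return nil, nil // runc has never created a container on this node
		}
		return nil, fmt.Errorf("runc list: %w: %s", err, out)
	}
	var containers []runcContainer // `runc list -f json` prints "null" when empty
	if err := json.Unmarshal(out, &containers); err != nil {
		return nil, fmt.Errorf("parse runc list output: %w", err)
	}
	var paused []string
	for _, c := range containers {
		if c.Status == "paused" {
			paused = append(paused, c.ID)
		}
	}
	return paused, nil
}

func main() {
	paused, err := listPaused()
	fmt.Println("paused containers:", paused, "err:", err)
}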
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-729957 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-729957 describe deploy/metrics-server -n kube-system: exit status 1 (95.398519ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-729957 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
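Because the enable call failed, the follow-up assertion has nothing to inspect: `kubectl describe deploy/metrics-server -n kube-system` returns NotFound, so the check that the deployment output contains the rewritten image reference "fake.domain/registry.k8s.io/echoserver:1.4" can only see empty output. A rough Go sketch of that style of substring assertion follows; the helper name and wiring are illustrative, not the test's actual code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// describeContains runs `kubectl describe` for the given deployment and
// reports whether its output mentions the expected image reference.
func describeContains(context, deployment, namespace, want string) (bool, error) {
	out, err := exec.Command("kubectl", "--context", context,
		"describe", "deploy/"+deployment, "-n", namespace).CombinedOutput()
	if err != nil {
		// e.g. `Error from server (NotFound)` when the addon never deployed.
		return false, fmt.Errorf("kubectl describe: %w: %s", err, out)
	}
	return strings.Contains(string(out), want), nil
}

func main() {
	ok, err := describeContains("no-preload-729957", "metrics-server", "kube-system",
		"fake.domain/registry.k8s.io/echoserver:1.4")
	fmt.Println("image reference present:", ok, "err:", err)
}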
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-729957
helpers_test.go:243: (dbg) docker inspect no-preload-729957:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "26cea4068f8df271decd5fca2af44d16fcce605ab26c19246830b355e9629673",
	        "Created": "2025-10-18T18:24:12.31875014Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 211555,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T18:24:12.39490407Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/26cea4068f8df271decd5fca2af44d16fcce605ab26c19246830b355e9629673/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/26cea4068f8df271decd5fca2af44d16fcce605ab26c19246830b355e9629673/hostname",
	        "HostsPath": "/var/lib/docker/containers/26cea4068f8df271decd5fca2af44d16fcce605ab26c19246830b355e9629673/hosts",
	        "LogPath": "/var/lib/docker/containers/26cea4068f8df271decd5fca2af44d16fcce605ab26c19246830b355e9629673/26cea4068f8df271decd5fca2af44d16fcce605ab26c19246830b355e9629673-json.log",
	        "Name": "/no-preload-729957",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-729957:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-729957",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "26cea4068f8df271decd5fca2af44d16fcce605ab26c19246830b355e9629673",
	                "LowerDir": "/var/lib/docker/overlay2/23e3b3ca1f79e937b59a52dcaa595b90f6276c9c388c3cfb57d1e199b659f3cd-init/diff:/var/lib/docker/overlay2/584ab177b02ad2db5330471b7171ad39934c457d8615b9ee4939a04b59f78474/diff",
	                "MergedDir": "/var/lib/docker/overlay2/23e3b3ca1f79e937b59a52dcaa595b90f6276c9c388c3cfb57d1e199b659f3cd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/23e3b3ca1f79e937b59a52dcaa595b90f6276c9c388c3cfb57d1e199b659f3cd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/23e3b3ca1f79e937b59a52dcaa595b90f6276c9c388c3cfb57d1e199b659f3cd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-729957",
	                "Source": "/var/lib/docker/volumes/no-preload-729957/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-729957",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-729957",
	                "name.minikube.sigs.k8s.io": "no-preload-729957",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "76651bd04153d29f00b6562f29845af702f74766a88efdcbe0c74e4fc729f518",
	            "SandboxKey": "/var/run/docker/netns/76651bd04153",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33073"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33074"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33077"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33075"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33076"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-729957": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "de:b0:c0:d5:bd:44",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9171cfee9247515a7d76872523f6d046330152cbb9ee1a62de7b40aaab7a7a81",
	                    "EndpointID": "a7282b495214e1bb98ea3713857f39d74ec4950c894c05797d627c60754c7cba",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-729957",
	                        "26cea4068f8d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-729957 -n no-preload-729957
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-729957 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-729957 logs -n 25: (1.475387014s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p default-k8s-diff-port-192562 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-192562 │ jenkins │ v1.37.0 │ 18 Oct 25 18:21 UTC │ 18 Oct 25 18:22 UTC │
	│ delete  │ -p cert-expiration-463770                                                                                                                                                                                                                     │ cert-expiration-463770       │ jenkins │ v1.37.0 │ 18 Oct 25 18:21 UTC │ 18 Oct 25 18:21 UTC │
	│ start   │ -p embed-certs-213943 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-213943           │ jenkins │ v1.37.0 │ 18 Oct 25 18:21 UTC │ 18 Oct 25 18:22 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-192562 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-192562 │ jenkins │ v1.37.0 │ 18 Oct 25 18:22 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-192562 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-192562 │ jenkins │ v1.37.0 │ 18 Oct 25 18:22 UTC │ 18 Oct 25 18:22 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-192562 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-192562 │ jenkins │ v1.37.0 │ 18 Oct 25 18:22 UTC │ 18 Oct 25 18:22 UTC │
	│ start   │ -p default-k8s-diff-port-192562 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-192562 │ jenkins │ v1.37.0 │ 18 Oct 25 18:22 UTC │ 18 Oct 25 18:23 UTC │
	│ addons  │ enable metrics-server -p embed-certs-213943 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-213943           │ jenkins │ v1.37.0 │ 18 Oct 25 18:23 UTC │                     │
	│ stop    │ -p embed-certs-213943 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-213943           │ jenkins │ v1.37.0 │ 18 Oct 25 18:23 UTC │ 18 Oct 25 18:23 UTC │
	│ addons  │ enable dashboard -p embed-certs-213943 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-213943           │ jenkins │ v1.37.0 │ 18 Oct 25 18:23 UTC │ 18 Oct 25 18:23 UTC │
	│ start   │ -p embed-certs-213943 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-213943           │ jenkins │ v1.37.0 │ 18 Oct 25 18:23 UTC │ 18 Oct 25 18:24 UTC │
	│ image   │ default-k8s-diff-port-192562 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-192562 │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │ 18 Oct 25 18:24 UTC │
	│ pause   │ -p default-k8s-diff-port-192562 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-192562 │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-192562                                                                                                                                                                                                               │ default-k8s-diff-port-192562 │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │ 18 Oct 25 18:24 UTC │
	│ delete  │ -p default-k8s-diff-port-192562                                                                                                                                                                                                               │ default-k8s-diff-port-192562 │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │ 18 Oct 25 18:24 UTC │
	│ delete  │ -p disable-driver-mounts-747178                                                                                                                                                                                                               │ disable-driver-mounts-747178 │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │ 18 Oct 25 18:24 UTC │
	│ start   │ -p no-preload-729957 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-729957            │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │ 18 Oct 25 18:25 UTC │
	│ image   │ embed-certs-213943 image list --format=json                                                                                                                                                                                                   │ embed-certs-213943           │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │ 18 Oct 25 18:24 UTC │
	│ pause   │ -p embed-certs-213943 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-213943           │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │                     │
	│ delete  │ -p embed-certs-213943                                                                                                                                                                                                                         │ embed-certs-213943           │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │ 18 Oct 25 18:24 UTC │
	│ delete  │ -p embed-certs-213943                                                                                                                                                                                                                         │ embed-certs-213943           │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │ 18 Oct 25 18:24 UTC │
	│ start   │ -p newest-cni-530891 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-530891            │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │ 18 Oct 25 18:25 UTC │
	│ addons  │ enable metrics-server -p newest-cni-530891 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-530891            │ jenkins │ v1.37.0 │ 18 Oct 25 18:25 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-729957 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-729957            │ jenkins │ v1.37.0 │ 18 Oct 25 18:25 UTC │                     │
	│ stop    │ -p newest-cni-530891 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-530891            │ jenkins │ v1.37.0 │ 18 Oct 25 18:25 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 18:24:45
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 18:24:45.932875  215342 out.go:360] Setting OutFile to fd 1 ...
	I1018 18:24:45.933083  215342 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 18:24:45.933106  215342 out.go:374] Setting ErrFile to fd 2...
	I1018 18:24:45.933125  215342 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 18:24:45.933400  215342 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-2509/.minikube/bin
	I1018 18:24:45.933823  215342 out.go:368] Setting JSON to false
	I1018 18:24:45.934723  215342 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7635,"bootTime":1760804251,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1018 18:24:45.934807  215342 start.go:141] virtualization:  
	I1018 18:24:45.940908  215342 out.go:179] * [newest-cni-530891] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 18:24:45.944292  215342 notify.go:220] Checking for updates...
	I1018 18:24:45.944894  215342 out.go:179]   - MINIKUBE_LOCATION=21409
	I1018 18:24:45.947949  215342 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 18:24:45.951253  215342 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-2509/kubeconfig
	I1018 18:24:45.954244  215342 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-2509/.minikube
	I1018 18:24:45.957079  215342 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 18:24:45.960292  215342 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 18:24:45.963749  215342 config.go:182] Loaded profile config "no-preload-729957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 18:24:45.963875  215342 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 18:24:46.015629  215342 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 18:24:46.015801  215342 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 18:24:46.138703  215342 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:44 OomKillDisable:true NGoroutines:60 SystemTime:2025-10-18 18:24:46.126579734 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 18:24:46.138809  215342 docker.go:318] overlay module found
	I1018 18:24:46.141923  215342 out.go:179] * Using the docker driver based on user configuration
	I1018 18:24:42.730436  211246 out.go:252]   - Booting up control plane ...
	I1018 18:24:42.730543  211246 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 18:24:42.730625  211246 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 18:24:42.732053  211246 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 18:24:42.758979  211246 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 18:24:42.759091  211246 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 18:24:42.773923  211246 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 18:24:42.774281  211246 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 18:24:42.774331  211246 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 18:24:42.986560  211246 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 18:24:42.986687  211246 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 18:24:43.494294  211246 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 506.537087ms
	I1018 18:24:43.503411  211246 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 18:24:43.503813  211246 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1018 18:24:43.504143  211246 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 18:24:43.506779  211246 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 18:24:46.144693  215342 start.go:305] selected driver: docker
	I1018 18:24:46.144706  215342 start.go:925] validating driver "docker" against <nil>
	I1018 18:24:46.144719  215342 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 18:24:46.145495  215342 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 18:24:46.255165  215342 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:44 OomKillDisable:true NGoroutines:60 SystemTime:2025-10-18 18:24:46.242928815 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 18:24:46.255316  215342 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1018 18:24:46.255344  215342 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1018 18:24:46.255580  215342 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1018 18:24:46.258752  215342 out.go:179] * Using Docker driver with root privileges
	I1018 18:24:46.261553  215342 cni.go:84] Creating CNI manager for ""
	I1018 18:24:46.261623  215342 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 18:24:46.261637  215342 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 18:24:46.261723  215342 start.go:349] cluster config:
	{Name:newest-cni-530891 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-530891 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 18:24:46.266602  215342 out.go:179] * Starting "newest-cni-530891" primary control-plane node in "newest-cni-530891" cluster
	I1018 18:24:46.269544  215342 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 18:24:46.272492  215342 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 18:24:46.275195  215342 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 18:24:46.275261  215342 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1018 18:24:46.275276  215342 cache.go:58] Caching tarball of preloaded images
	I1018 18:24:46.275361  215342 preload.go:233] Found /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 18:24:46.275380  215342 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 18:24:46.275497  215342 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/newest-cni-530891/config.json ...
	I1018 18:24:46.275520  215342 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/newest-cni-530891/config.json: {Name:mk5c50712877ef8c2e83788190119601f25e9ded Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:24:46.275690  215342 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 18:24:46.325532  215342 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 18:24:46.325552  215342 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 18:24:46.325573  215342 cache.go:232] Successfully downloaded all kic artifacts
	I1018 18:24:46.325595  215342 start.go:360] acquireMachinesLock for newest-cni-530891: {Name:mk0c4ba013544ae9a143d95908b1cd72d649cb51 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 18:24:46.325708  215342 start.go:364] duration metric: took 98.709µs to acquireMachinesLock for "newest-cni-530891"
	I1018 18:24:46.325733  215342 start.go:93] Provisioning new machine with config: &{Name:newest-cni-530891 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-530891 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 18:24:46.325809  215342 start.go:125] createHost starting for "" (driver="docker")
	I1018 18:24:46.329254  215342 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1018 18:24:46.329501  215342 start.go:159] libmachine.API.Create for "newest-cni-530891" (driver="docker")
	I1018 18:24:46.329548  215342 client.go:168] LocalClient.Create starting
	I1018 18:24:46.329620  215342 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem
	I1018 18:24:46.329655  215342 main.go:141] libmachine: Decoding PEM data...
	I1018 18:24:46.329668  215342 main.go:141] libmachine: Parsing certificate...
	I1018 18:24:46.329724  215342 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem
	I1018 18:24:46.329740  215342 main.go:141] libmachine: Decoding PEM data...
	I1018 18:24:46.329754  215342 main.go:141] libmachine: Parsing certificate...
	I1018 18:24:46.330107  215342 cli_runner.go:164] Run: docker network inspect newest-cni-530891 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 18:24:46.354062  215342 cli_runner.go:211] docker network inspect newest-cni-530891 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 18:24:46.354142  215342 network_create.go:284] running [docker network inspect newest-cni-530891] to gather additional debugging logs...
	I1018 18:24:46.354163  215342 cli_runner.go:164] Run: docker network inspect newest-cni-530891
	W1018 18:24:46.380902  215342 cli_runner.go:211] docker network inspect newest-cni-530891 returned with exit code 1
	I1018 18:24:46.380947  215342 network_create.go:287] error running [docker network inspect newest-cni-530891]: docker network inspect newest-cni-530891: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-530891 not found
	I1018 18:24:46.380961  215342 network_create.go:289] output of [docker network inspect newest-cni-530891]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-530891 not found
	
	** /stderr **
	I1018 18:24:46.381070  215342 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 18:24:46.410899  215342 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-903568cdf824 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:7a:80:c0:8c:ed} reservation:<nil>}
	I1018 18:24:46.411239  215342 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-ee9fcaab9ca8 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a6:a7:65:1b:c0:41} reservation:<nil>}
	I1018 18:24:46.411568  215342 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-414fc11e154b IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:86:f0:a8:1a:86:00} reservation:<nil>}
	I1018 18:24:46.411832  215342 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-9171cfee9247 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:e6:21:8a:96:2d:4e} reservation:<nil>}
	I1018 18:24:46.412240  215342 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a23510}
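The subnet scan above walks the private 192.168.x.0/24 ranges and skips any that an existing Docker bridge already claims, then settles on the first free one. A quick way to reproduce that view of occupied subnets on the same host (a sketch; the exact output formatting is an assumption):

    docker network ls -q \
      | xargs -n1 docker network inspect \
          --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}'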
	I1018 18:24:46.412266  215342 network_create.go:124] attempt to create docker network newest-cni-530891 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1018 18:24:46.412324  215342 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-530891 newest-cni-530891
	I1018 18:24:46.507412  215342 network_create.go:108] docker network newest-cni-530891 192.168.85.0/24 created
	I1018 18:24:46.507449  215342 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-530891" container
	I1018 18:24:46.507539  215342 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 18:24:46.557380  215342 cli_runner.go:164] Run: docker volume create newest-cni-530891 --label name.minikube.sigs.k8s.io=newest-cni-530891 --label created_by.minikube.sigs.k8s.io=true
	I1018 18:24:46.588225  215342 oci.go:103] Successfully created a docker volume newest-cni-530891
	I1018 18:24:46.588322  215342 cli_runner.go:164] Run: docker run --rm --name newest-cni-530891-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-530891 --entrypoint /usr/bin/test -v newest-cni-530891:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 18:24:47.355861  215342 oci.go:107] Successfully prepared a docker volume newest-cni-530891
	I1018 18:24:47.355903  215342 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 18:24:47.355922  215342 kic.go:194] Starting extracting preloaded images to volume ...
	I1018 18:24:47.355983  215342 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-530891:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1018 18:24:47.791431  211246 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 4.283724914s
	I1018 18:24:51.110185  211246 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 7.602905366s
	I1018 18:24:53.511136  211246 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 10.003651368s
	I1018 18:24:53.617528  211246 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 18:24:53.656162  211246 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 18:24:53.685048  211246 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 18:24:53.685495  211246 kubeadm.go:318] [mark-control-plane] Marking the node no-preload-729957 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 18:24:53.711832  211246 kubeadm.go:318] [bootstrap-token] Using token: 1orxdi.912wtn2m7d6gr5u8
	I1018 18:24:53.714768  211246 out.go:252]   - Configuring RBAC rules ...
	I1018 18:24:53.714896  211246 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 18:24:53.743390  211246 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 18:24:53.771764  211246 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 18:24:53.781347  211246 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 18:24:53.792076  211246 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 18:24:53.805584  211246 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 18:24:53.931799  211246 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 18:24:54.570379  211246 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 18:24:54.922190  211246 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 18:24:54.923347  211246 kubeadm.go:318] 
	I1018 18:24:54.923452  211246 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 18:24:54.923459  211246 kubeadm.go:318] 
	I1018 18:24:54.923540  211246 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 18:24:54.923545  211246 kubeadm.go:318] 
	I1018 18:24:54.923572  211246 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 18:24:54.923641  211246 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 18:24:54.923694  211246 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 18:24:54.923699  211246 kubeadm.go:318] 
	I1018 18:24:54.923756  211246 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 18:24:54.923760  211246 kubeadm.go:318] 
	I1018 18:24:54.923810  211246 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 18:24:54.923815  211246 kubeadm.go:318] 
	I1018 18:24:54.923869  211246 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 18:24:54.923948  211246 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 18:24:54.924019  211246 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 18:24:54.924024  211246 kubeadm.go:318] 
	I1018 18:24:54.924112  211246 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 18:24:54.924196  211246 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 18:24:54.924201  211246 kubeadm.go:318] 
	I1018 18:24:54.924293  211246 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 1orxdi.912wtn2m7d6gr5u8 \
	I1018 18:24:54.924401  211246 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:d0244c5bf86cdf97546c6a22045cb6ed9d7ead524d9c98d9ca35da77d5d7a04d \
	I1018 18:24:54.924422  211246 kubeadm.go:318] 	--control-plane 
	I1018 18:24:54.924427  211246 kubeadm.go:318] 
	I1018 18:24:54.924515  211246 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 18:24:54.924519  211246 kubeadm.go:318] 
	I1018 18:24:54.924604  211246 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 1orxdi.912wtn2m7d6gr5u8 \
	I1018 18:24:54.924719  211246 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:d0244c5bf86cdf97546c6a22045cb6ed9d7ead524d9c98d9ca35da77d5d7a04d 
	I1018 18:24:54.931528  211246 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1018 18:24:54.931767  211246 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1018 18:24:54.931876  211246 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
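The join commands printed above embed a bootstrap token and the CA certificate hash. If that token has expired by the time another node joins, an equivalent command can be reprinted on the control plane and the hash recomputed from the cluster CA; a sketch using the standard kubeadm/openssl recipe (not something this test run does itself, and it assumes kubeadm is on PATH, an RSA CA key, and minikube's CertDir /var/lib/minikube/certs seen elsewhere in this log):

    # Print a fresh join command (mints a new token)
    sudo kubeadm token create --print-join-command
    # Recompute the discovery-token-ca-cert-hash from the cluster CA
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'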
	I1018 18:24:54.931954  211246 cni.go:84] Creating CNI manager for ""
	I1018 18:24:54.931965  211246 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 18:24:54.935698  211246 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1018 18:24:51.859966  215342 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-530891:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.503943282s)
	I1018 18:24:51.859995  215342 kic.go:203] duration metric: took 4.504070151s to extract preloaded images to volume ...
	W1018 18:24:51.860161  215342 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1018 18:24:51.860262  215342 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 18:24:51.944064  215342 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-530891 --name newest-cni-530891 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-530891 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-530891 --network newest-cni-530891 --ip 192.168.85.2 --volume newest-cni-530891:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 18:24:52.388897  215342 cli_runner.go:164] Run: docker container inspect newest-cni-530891 --format={{.State.Running}}
	I1018 18:24:52.414787  215342 cli_runner.go:164] Run: docker container inspect newest-cni-530891 --format={{.State.Status}}
	I1018 18:24:52.442620  215342 cli_runner.go:164] Run: docker exec newest-cni-530891 stat /var/lib/dpkg/alternatives/iptables
	I1018 18:24:52.510078  215342 oci.go:144] the created container "newest-cni-530891" has a running status.
	I1018 18:24:52.510112  215342 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/newest-cni-530891/id_rsa...
	I1018 18:24:54.758513  215342 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21409-2509/.minikube/machines/newest-cni-530891/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 18:24:54.785747  215342 cli_runner.go:164] Run: docker container inspect newest-cni-530891 --format={{.State.Status}}
	I1018 18:24:54.803882  215342 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 18:24:54.803907  215342 kic_runner.go:114] Args: [docker exec --privileged newest-cni-530891 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 18:24:54.886757  215342 cli_runner.go:164] Run: docker container inspect newest-cni-530891 --format={{.State.Status}}
	I1018 18:24:54.917955  215342 machine.go:93] provisionDockerMachine start ...
	I1018 18:24:54.918087  215342 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-530891
	I1018 18:24:54.950227  215342 main.go:141] libmachine: Using SSH client type: native
	I1018 18:24:54.950595  215342 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1018 18:24:54.950615  215342 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 18:24:55.121041  215342 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-530891
	
	I1018 18:24:55.121118  215342 ubuntu.go:182] provisioning hostname "newest-cni-530891"
	I1018 18:24:55.121215  215342 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-530891
	I1018 18:24:55.153287  215342 main.go:141] libmachine: Using SSH client type: native
	I1018 18:24:55.153591  215342 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1018 18:24:55.153603  215342 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-530891 && echo "newest-cni-530891" | sudo tee /etc/hostname
	I1018 18:24:55.339475  215342 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-530891
	
	I1018 18:24:55.339581  215342 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-530891
	I1018 18:24:55.366931  215342 main.go:141] libmachine: Using SSH client type: native
	I1018 18:24:55.367246  215342 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1018 18:24:55.367274  215342 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-530891' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-530891/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-530891' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 18:24:55.526439  215342 main.go:141] libmachine: SSH cmd err, output: <nil>: 
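The hostname and /etc/hosts commands above are driven over the forwarded SSH port shown in the native client struct (127.0.0.1:33078). The same session can be opened manually with the per-profile key; a sketch (the port is assigned per run, and the key path and docker user are taken from the ssh client lines further down in this log):

    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
      -i /home/jenkins/minikube-integration/21409-2509/.minikube/machines/newest-cni-530891/id_rsa \
      -p 33078 docker@127.0.0.1 hostname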
	I1018 18:24:55.526468  215342 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-2509/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-2509/.minikube}
	I1018 18:24:55.526537  215342 ubuntu.go:190] setting up certificates
	I1018 18:24:55.526554  215342 provision.go:84] configureAuth start
	I1018 18:24:55.526632  215342 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-530891
	I1018 18:24:55.553249  215342 provision.go:143] copyHostCerts
	I1018 18:24:55.553319  215342 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem, removing ...
	I1018 18:24:55.553333  215342 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem
	I1018 18:24:55.553422  215342 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem (1078 bytes)
	I1018 18:24:55.553521  215342 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem, removing ...
	I1018 18:24:55.553531  215342 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem
	I1018 18:24:55.553558  215342 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem (1123 bytes)
	I1018 18:24:55.553613  215342 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem, removing ...
	I1018 18:24:55.553627  215342 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem
	I1018 18:24:55.553651  215342 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem (1675 bytes)
	I1018 18:24:55.553700  215342 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem org=jenkins.newest-cni-530891 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-530891]
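configureAuth generates a server certificate whose SANs cover the loopback address, the container IP and the profile name, as the san=[...] list above shows. The resulting SANs can be checked with openssl against the server.pem path listed in the auth options, a verification sketch rather than part of the test flow:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'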
	I1018 18:24:54.938776  211246 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 18:24:54.946051  211246 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1018 18:24:54.946069  211246 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 18:24:54.986484  211246 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1018 18:24:55.486636  211246 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 18:24:55.486784  211246 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:24:55.486866  211246 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-729957 minikube.k8s.io/updated_at=2025_10_18T18_24_55_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404 minikube.k8s.io/name=no-preload-729957 minikube.k8s.io/primary=true
	I1018 18:24:55.792570  211246 ops.go:34] apiserver oom_adj: -16
	I1018 18:24:55.792713  211246 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:24:56.523844  215342 provision.go:177] copyRemoteCerts
	I1018 18:24:56.523917  215342 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 18:24:56.523986  215342 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-530891
	I1018 18:24:56.543208  215342 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/newest-cni-530891/id_rsa Username:docker}
	I1018 18:24:56.653678  215342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 18:24:56.673832  215342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1018 18:24:56.695486  215342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1018 18:24:56.716300  215342 provision.go:87] duration metric: took 1.189723873s to configureAuth
	I1018 18:24:56.716325  215342 ubuntu.go:206] setting minikube options for container-runtime
	I1018 18:24:56.716526  215342 config.go:182] Loaded profile config "newest-cni-530891": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 18:24:56.716639  215342 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-530891
	I1018 18:24:56.734455  215342 main.go:141] libmachine: Using SSH client type: native
	I1018 18:24:56.734755  215342 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1018 18:24:56.734774  215342 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 18:24:57.028752  215342 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 18:24:57.028782  215342 machine.go:96] duration metric: took 2.110790442s to provisionDockerMachine
	I1018 18:24:57.028792  215342 client.go:171] duration metric: took 10.699237032s to LocalClient.Create
	I1018 18:24:57.028806  215342 start.go:167] duration metric: took 10.69930562s to libmachine.API.Create "newest-cni-530891"
	I1018 18:24:57.028813  215342 start.go:293] postStartSetup for "newest-cni-530891" (driver="docker")
	I1018 18:24:57.028823  215342 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 18:24:57.028899  215342 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 18:24:57.028975  215342 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-530891
	I1018 18:24:57.049346  215342 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/newest-cni-530891/id_rsa Username:docker}
	I1018 18:24:57.157817  215342 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 18:24:57.161275  215342 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 18:24:57.161306  215342 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 18:24:57.161321  215342 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/addons for local assets ...
	I1018 18:24:57.161377  215342 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/files for local assets ...
	I1018 18:24:57.161460  215342 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> 43202.pem in /etc/ssl/certs
	I1018 18:24:57.161565  215342 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 18:24:57.169000  215342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /etc/ssl/certs/43202.pem (1708 bytes)
	I1018 18:24:57.186452  215342 start.go:296] duration metric: took 157.624402ms for postStartSetup
	I1018 18:24:57.186812  215342 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-530891
	I1018 18:24:57.205068  215342 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/newest-cni-530891/config.json ...
	I1018 18:24:57.205361  215342 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 18:24:57.205427  215342 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-530891
	I1018 18:24:57.223454  215342 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/newest-cni-530891/id_rsa Username:docker}
	I1018 18:24:57.327080  215342 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 18:24:57.332503  215342 start.go:128] duration metric: took 11.006678228s to createHost
	I1018 18:24:57.332568  215342 start.go:83] releasing machines lock for "newest-cni-530891", held for 11.006849988s
	I1018 18:24:57.332665  215342 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-530891
	I1018 18:24:57.353506  215342 ssh_runner.go:195] Run: cat /version.json
	I1018 18:24:57.353555  215342 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 18:24:57.353618  215342 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-530891
	I1018 18:24:57.353561  215342 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-530891
	I1018 18:24:57.402424  215342 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/newest-cni-530891/id_rsa Username:docker}
	I1018 18:24:57.414590  215342 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/newest-cni-530891/id_rsa Username:docker}
	I1018 18:24:57.621798  215342 ssh_runner.go:195] Run: systemctl --version
	I1018 18:24:57.628577  215342 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 18:24:57.666794  215342 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 18:24:57.671441  215342 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 18:24:57.671531  215342 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 18:24:57.704094  215342 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1018 18:24:57.704129  215342 start.go:495] detecting cgroup driver to use...
	I1018 18:24:57.704178  215342 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 18:24:57.704254  215342 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 18:24:57.723105  215342 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 18:24:57.736738  215342 docker.go:218] disabling cri-docker service (if available) ...
	I1018 18:24:57.736808  215342 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 18:24:57.755446  215342 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 18:24:57.776315  215342 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 18:24:57.937793  215342 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 18:24:58.093115  215342 docker.go:234] disabling docker service ...
	I1018 18:24:58.093235  215342 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 18:24:58.118016  215342 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 18:24:58.132556  215342 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 18:24:58.260457  215342 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 18:24:58.421965  215342 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 18:24:58.436494  215342 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 18:24:58.457029  215342 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 18:24:58.457135  215342 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:24:58.489681  215342 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 18:24:58.489787  215342 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:24:58.503613  215342 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:24:58.516829  215342 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:24:58.532562  215342 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 18:24:58.548395  215342 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:24:58.564511  215342 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:24:58.590389  215342 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:24:58.609218  215342 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 18:24:58.622461  215342 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 18:24:58.635883  215342 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 18:24:58.766382  215342 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 18:24:58.923839  215342 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 18:24:58.923950  215342 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 18:24:58.928370  215342 start.go:563] Will wait 60s for crictl version
	I1018 18:24:58.928464  215342 ssh_runner.go:195] Run: which crictl
	I1018 18:24:58.932009  215342 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 18:24:58.960459  215342 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 18:24:58.960606  215342 ssh_runner.go:195] Run: crio --version
	I1018 18:24:58.989973  215342 ssh_runner.go:195] Run: crio --version
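The sequence of sed edits above rewrites /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs cgroup manager, conmon cgroup, unprivileged port sysctl) before restarting CRI-O. The effective values can be spot-checked on the node; a sketch:

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    sudo systemctl is-active crio && sudo crictl version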
	I1018 18:24:59.027520  215342 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 18:24:59.030427  215342 cli_runner.go:164] Run: docker network inspect newest-cni-530891 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 18:24:59.049026  215342 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1018 18:24:59.053584  215342 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 18:24:59.066670  215342 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1018 18:24:56.293340  211246 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:24:56.793646  211246 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:24:57.293061  211246 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:24:57.793100  211246 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:24:58.293712  211246 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:24:58.792811  211246 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:24:59.293138  211246 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:24:59.498371  211246 kubeadm.go:1113] duration metric: took 4.011641283s to wait for elevateKubeSystemPrivileges
	I1018 18:24:59.498399  211246 kubeadm.go:402] duration metric: took 24.387075148s to StartCluster
	I1018 18:24:59.498431  211246 settings.go:142] acquiring lock: {Name:mk3a3fd093bc95e20cc1842611fedcbe4a79e692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:24:59.498489  211246 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-2509/kubeconfig
	I1018 18:24:59.499116  211246 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/kubeconfig: {Name:mk229d7d0b6575771899e0ce8a346bbddf5cf86b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:24:59.499328  211246 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 18:24:59.499425  211246 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 18:24:59.499640  211246 config.go:182] Loaded profile config "no-preload-729957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 18:24:59.499670  211246 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 18:24:59.499726  211246 addons.go:69] Setting storage-provisioner=true in profile "no-preload-729957"
	I1018 18:24:59.499740  211246 addons.go:238] Setting addon storage-provisioner=true in "no-preload-729957"
	I1018 18:24:59.499760  211246 host.go:66] Checking if "no-preload-729957" exists ...
	I1018 18:24:59.500236  211246 cli_runner.go:164] Run: docker container inspect no-preload-729957 --format={{.State.Status}}
	I1018 18:24:59.500651  211246 addons.go:69] Setting default-storageclass=true in profile "no-preload-729957"
	I1018 18:24:59.500674  211246 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-729957"
	I1018 18:24:59.500977  211246 cli_runner.go:164] Run: docker container inspect no-preload-729957 --format={{.State.Status}}
	I1018 18:24:59.503260  211246 out.go:179] * Verifying Kubernetes components...
	I1018 18:24:59.506966  211246 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 18:24:59.535547  211246 addons.go:238] Setting addon default-storageclass=true in "no-preload-729957"
	I1018 18:24:59.535585  211246 host.go:66] Checking if "no-preload-729957" exists ...
	I1018 18:24:59.535992  211246 cli_runner.go:164] Run: docker container inspect no-preload-729957 --format={{.State.Status}}
	I1018 18:24:59.544152  211246 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 18:24:59.069396  215342 kubeadm.go:883] updating cluster {Name:newest-cni-530891 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-530891 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disab
leMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 18:24:59.069559  215342 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 18:24:59.069660  215342 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 18:24:59.105741  215342 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 18:24:59.105764  215342 crio.go:433] Images already preloaded, skipping extraction
	I1018 18:24:59.105822  215342 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 18:24:59.130534  215342 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 18:24:59.130557  215342 cache_images.go:85] Images are preloaded, skipping loading
	I1018 18:24:59.130565  215342 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1018 18:24:59.130650  215342 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-530891 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-530891 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 18:24:59.130735  215342 ssh_runner.go:195] Run: crio config
	I1018 18:24:59.238064  215342 cni.go:84] Creating CNI manager for ""
	I1018 18:24:59.238136  215342 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 18:24:59.238166  215342 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1018 18:24:59.238224  215342 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-530891 NodeName:newest-cni-530891 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 18:24:59.238393  215342 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-530891"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 18:24:59.238496  215342 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 18:24:59.246986  215342 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 18:24:59.247101  215342 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 18:24:59.255010  215342 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1018 18:24:59.270921  215342 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 18:24:59.283902  215342 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
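At this point the kubelet drop-in and the kubeadm config rendered above have landed on the node (10-kubeadm.conf, kubelet.service and kubeadm.yaml.new). Both can be exercised without side effects before init runs; a sketch, assuming the versioned kubeadm binary sits alongside the kubectl and kubelet binaries referenced elsewhere in this log:

    # Inspect the installed kubelet unit together with its drop-in
    systemctl cat kubelet
    # Dry-run the generated kubeadm config (no changes applied)
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run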
	I1018 18:24:59.303733  215342 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1018 18:24:59.308014  215342 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 18:24:59.320818  215342 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 18:24:59.541695  215342 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 18:24:59.581016  215342 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/newest-cni-530891 for IP: 192.168.85.2
	I1018 18:24:59.581035  215342 certs.go:195] generating shared ca certs ...
	I1018 18:24:59.581057  215342 certs.go:227] acquiring lock for ca certs: {Name:mk544ed642fa2832e9f6dd22fa45f3270b7c1ad4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:24:59.581194  215342 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key
	I1018 18:24:59.581235  215342 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key
	I1018 18:24:59.581242  215342 certs.go:257] generating profile certs ...
	I1018 18:24:59.581304  215342 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/newest-cni-530891/client.key
	I1018 18:24:59.581314  215342 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/newest-cni-530891/client.crt with IP's: []
	I1018 18:24:59.775607  215342 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/newest-cni-530891/client.crt ...
	I1018 18:24:59.775644  215342 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/newest-cni-530891/client.crt: {Name:mkfb2b01a029c3ec7d8b39650689a2841c96b5f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:24:59.775826  215342 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/newest-cni-530891/client.key ...
	I1018 18:24:59.775841  215342 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/newest-cni-530891/client.key: {Name:mkf199f78dd53c80a75b60e7356f06520d4d7edc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:24:59.775923  215342 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/newest-cni-530891/apiserver.key.41f4075b
	I1018 18:24:59.775942  215342 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/newest-cni-530891/apiserver.crt.41f4075b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1018 18:25:00.329130  215342 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/newest-cni-530891/apiserver.crt.41f4075b ...
	I1018 18:25:00.329167  215342 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/newest-cni-530891/apiserver.crt.41f4075b: {Name:mkf365ffd4125c8bbfe53ccf847577d844693ef1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:25:00.329365  215342 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/newest-cni-530891/apiserver.key.41f4075b ...
	I1018 18:25:00.329524  215342 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/newest-cni-530891/apiserver.key.41f4075b: {Name:mk72bdf1046f6c61552d02a1873a3f73ba03738f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:25:00.329656  215342 certs.go:382] copying /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/newest-cni-530891/apiserver.crt.41f4075b -> /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/newest-cni-530891/apiserver.crt
	I1018 18:25:00.329762  215342 certs.go:386] copying /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/newest-cni-530891/apiserver.key.41f4075b -> /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/newest-cni-530891/apiserver.key
	I1018 18:25:00.329878  215342 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/newest-cni-530891/proxy-client.key
	I1018 18:25:00.329906  215342 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/newest-cni-530891/proxy-client.crt with IP's: []
	I1018 18:25:00.660039  215342 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/newest-cni-530891/proxy-client.crt ...
	I1018 18:25:00.660114  215342 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/newest-cni-530891/proxy-client.crt: {Name:mk536eef0d60632d58f25f2d2097a2e43686c535 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:25:00.660387  215342 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/newest-cni-530891/proxy-client.key ...
	I1018 18:25:00.660432  215342 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/newest-cni-530891/proxy-client.key: {Name:mk00310e4828ea5b36061eb09117b8b053f89c9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:25:00.660704  215342 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem (1338 bytes)
	W1018 18:25:00.660828  215342 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320_empty.pem, impossibly tiny 0 bytes
	I1018 18:25:00.660862  215342 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 18:25:00.660911  215342 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem (1078 bytes)
	I1018 18:25:00.661627  215342 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem (1123 bytes)
	I1018 18:25:00.661695  215342 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem (1675 bytes)
	I1018 18:25:00.661801  215342 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem (1708 bytes)
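Note (annotation, not part of the captured log): the "generating signed profile cert" steps above create, per profile, a key pair plus a certificate signed by the shared minikube CA, with the apiserver certificate carrying the service, loopback and node IPs as SANs. The Go sketch below is a minimal, hedged illustration of that pattern using crypto/x509; it is not minikube's crypto.go, and the subject names are placeholders.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Self-signed CA, standing in for the shared "minikubeCA" above.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Leaf certificate for the apiserver profile, signed by the CA,
        // with the IP SANs the log lists: 10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2.
        leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        leafTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
            },
        }
        leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)

        // Emit the leaf cert as PEM, analogous to apiserver.crt above.
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
    }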
	I1018 18:25:00.662482  215342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 18:25:00.686788  215342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 18:25:00.709540  215342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 18:25:00.743342  215342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 18:25:00.781312  215342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/newest-cni-530891/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1018 18:25:00.812628  215342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/newest-cni-530891/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 18:25:00.838335  215342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/newest-cni-530891/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 18:25:00.865025  215342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/newest-cni-530891/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 18:25:00.898136  215342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /usr/share/ca-certificates/43202.pem (1708 bytes)
	I1018 18:25:00.922470  215342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 18:24:59.547805  211246 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 18:24:59.547826  211246 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 18:24:59.547890  211246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-729957
	I1018 18:24:59.577806  211246 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/no-preload-729957/id_rsa Username:docker}
	I1018 18:24:59.616346  211246 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 18:24:59.616375  211246 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 18:24:59.616440  211246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-729957
	I1018 18:24:59.701348  211246 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/no-preload-729957/id_rsa Username:docker}
	I1018 18:25:00.335110  211246 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 18:25:00.403928  211246 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 18:25:00.404218  211246 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1018 18:25:00.545349  211246 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 18:25:00.957364  215342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem --> /usr/share/ca-certificates/4320.pem (1338 bytes)
	I1018 18:25:00.982181  215342 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 18:25:01.002053  215342 ssh_runner.go:195] Run: openssl version
	I1018 18:25:01.012526  215342 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 18:25:01.023410  215342 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 18:25:01.030403  215342 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I1018 18:25:01.030518  215342 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 18:25:01.080814  215342 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 18:25:01.095942  215342 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4320.pem && ln -fs /usr/share/ca-certificates/4320.pem /etc/ssl/certs/4320.pem"
	I1018 18:25:01.106445  215342 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4320.pem
	I1018 18:25:01.113684  215342 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 17:19 /usr/share/ca-certificates/4320.pem
	I1018 18:25:01.113805  215342 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4320.pem
	I1018 18:25:01.163270  215342 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4320.pem /etc/ssl/certs/51391683.0"
	I1018 18:25:01.173116  215342 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43202.pem && ln -fs /usr/share/ca-certificates/43202.pem /etc/ssl/certs/43202.pem"
	I1018 18:25:01.183722  215342 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43202.pem
	I1018 18:25:01.190227  215342 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 17:19 /usr/share/ca-certificates/43202.pem
	I1018 18:25:01.190397  215342 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43202.pem
	I1018 18:25:01.237658  215342 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43202.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 18:25:01.247340  215342 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 18:25:01.254008  215342 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 18:25:01.254122  215342 kubeadm.go:400] StartCluster: {Name:newest-cni-530891 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-530891 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 18:25:01.254281  215342 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 18:25:01.254378  215342 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 18:25:01.305658  215342 cri.go:89] found id: ""
	I1018 18:25:01.305786  215342 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 18:25:01.317661  215342 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 18:25:01.333654  215342 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 18:25:01.333765  215342 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 18:25:01.347311  215342 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 18:25:01.347382  215342 kubeadm.go:157] found existing configuration files:
	
	I1018 18:25:01.347467  215342 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 18:25:01.358918  215342 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 18:25:01.359028  215342 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 18:25:01.371893  215342 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 18:25:01.387152  215342 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 18:25:01.387273  215342 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 18:25:01.400682  215342 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 18:25:01.414935  215342 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 18:25:01.415052  215342 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 18:25:01.429061  215342 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 18:25:01.447607  215342 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 18:25:01.447726  215342 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1018 18:25:01.463674  215342 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 18:25:01.611790  215342 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 18:25:01.618165  215342 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 18:25:01.675624  215342 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 18:25:01.675787  215342 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1018 18:25:01.675859  215342 kubeadm.go:318] OS: Linux
	I1018 18:25:01.675969  215342 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 18:25:01.676044  215342 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1018 18:25:01.676127  215342 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 18:25:01.676213  215342 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 18:25:01.676296  215342 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 18:25:01.676378  215342 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 18:25:01.676459  215342 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 18:25:01.676542  215342 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 18:25:01.676647  215342 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1018 18:25:01.821917  215342 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 18:25:01.822089  215342 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 18:25:01.822219  215342 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 18:25:01.843815  215342 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 18:25:02.085868  211246 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.750659743s)
	I1018 18:25:02.274554  211246 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.870285995s)
	I1018 18:25:02.274593  211246 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
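Note (annotation, not part of the captured log): the sed pipeline completed above inserts a hosts plugin block before the "forward . /etc/resolv.conf" line and a "log" directive before "errors" in the CoreDNS Corefile, then replaces the ConfigMap. Assuming an otherwise stock Corefile, the affected portion would look roughly like the following; this is reconstructed from the sed script, not read back from the cluster:

    .:53 {
        log
        errors
        hosts {
           192.168.76.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        cache 30
    }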
	I1018 18:25:02.275680  211246 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.871580042s)
	I1018 18:25:02.276438  211246 node_ready.go:35] waiting up to 6m0s for node "no-preload-729957" to be "Ready" ...
	I1018 18:25:02.794828  211246 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-729957" context rescaled to 1 replicas
	I1018 18:25:02.856762  211246 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.311374127s)
	I1018 18:25:02.860011  211246 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1018 18:25:01.847885  215342 out.go:252]   - Generating certificates and keys ...
	I1018 18:25:01.847982  215342 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 18:25:01.848059  215342 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 18:25:02.147067  215342 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 18:25:02.728274  215342 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 18:25:02.912859  215342 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 18:25:03.892952  215342 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 18:25:04.263331  215342 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 18:25:04.263952  215342 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-530891] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1018 18:25:04.473330  215342 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 18:25:04.473949  215342 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-530891] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1018 18:25:04.816786  215342 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 18:25:05.522094  215342 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 18:25:02.862939  211246 addons.go:514] duration metric: took 3.363237374s for enable addons: enabled=[default-storageclass storage-provisioner]
	W1018 18:25:04.279583  211246 node_ready.go:57] node "no-preload-729957" has "Ready":"False" status (will retry)
	I1018 18:25:05.974571  215342 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 18:25:05.974984  215342 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 18:25:06.723520  215342 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 18:25:08.186598  215342 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 18:25:08.429335  215342 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 18:25:08.643204  215342 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 18:25:09.054377  215342 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 18:25:09.055075  215342 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 18:25:09.059957  215342 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 18:25:09.063476  215342 out.go:252]   - Booting up control plane ...
	I1018 18:25:09.063581  215342 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 18:25:09.063667  215342 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 18:25:09.064325  215342 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 18:25:09.079778  215342 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 18:25:09.079895  215342 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 18:25:09.087313  215342 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 18:25:09.087765  215342 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 18:25:09.087998  215342 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 18:25:09.221506  215342 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 18:25:09.221634  215342 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	W1018 18:25:06.280922  211246 node_ready.go:57] node "no-preload-729957" has "Ready":"False" status (will retry)
	W1018 18:25:08.780474  211246 node_ready.go:57] node "no-preload-729957" has "Ready":"False" status (will retry)
	I1018 18:25:11.221434  215342 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 2.000790529s
	I1018 18:25:11.224741  215342 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 18:25:11.224841  215342 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1018 18:25:11.225181  215342 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 18:25:11.225277  215342 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 18:25:13.889564  215342 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.664362067s
	W1018 18:25:11.279934  211246 node_ready.go:57] node "no-preload-729957" has "Ready":"False" status (will retry)
	W1018 18:25:13.779444  211246 node_ready.go:57] node "no-preload-729957" has "Ready":"False" status (will retry)
	W1018 18:25:15.780007  211246 node_ready.go:57] node "no-preload-729957" has "Ready":"False" status (will retry)
	I1018 18:25:17.476339  215342 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 6.251561218s
	I1018 18:25:17.727118  215342 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.502142394s
	I1018 18:25:17.748931  215342 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 18:25:17.768324  215342 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 18:25:17.779358  215342 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 18:25:17.779564  215342 kubeadm.go:318] [mark-control-plane] Marking the node newest-cni-530891 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 18:25:17.792355  215342 kubeadm.go:318] [bootstrap-token] Using token: 5rfsqk.2p7sczsef9jhpde8
	I1018 18:25:16.281790  211246 node_ready.go:49] node "no-preload-729957" is "Ready"
	I1018 18:25:16.281820  211246 node_ready.go:38] duration metric: took 14.005358073s for node "no-preload-729957" to be "Ready" ...
	I1018 18:25:16.281833  211246 api_server.go:52] waiting for apiserver process to appear ...
	I1018 18:25:16.281895  211246 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 18:25:16.316789  211246 api_server.go:72] duration metric: took 16.817433135s to wait for apiserver process to appear ...
	I1018 18:25:16.316812  211246 api_server.go:88] waiting for apiserver healthz status ...
	I1018 18:25:16.316830  211246 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 18:25:16.329010  211246 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1018 18:25:16.330363  211246 api_server.go:141] control plane version: v1.34.1
	I1018 18:25:16.330384  211246 api_server.go:131] duration metric: took 13.565549ms to wait for apiserver health ...
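Note (annotation, not part of the captured log): the "waiting for apiserver healthz status" step above amounts to polling the API server's /healthz endpoint over HTTPS until it returns 200 or an overall timeout expires. A minimal, hedged Go sketch of such a polling loop follows; it skips certificate verification only to stay self-contained and is not minikube's actual client setup:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitForHealthz polls url until it returns HTTP 200 or timeout elapses.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 2 * time.Second,
            // InsecureSkipVerify keeps the sketch self-contained; a real
            // client would trust the cluster CA instead.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", url)
    }

    func main() {
        if err := waitForHealthz("https://192.168.76.2:8443/healthz", 4*time.Minute); err != nil {
            fmt.Println(err)
        }
    }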
	I1018 18:25:16.330393  211246 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 18:25:16.337413  211246 system_pods.go:59] 8 kube-system pods found
	I1018 18:25:16.337445  211246 system_pods.go:61] "coredns-66bc5c9577-q7mng" [365b51ac-c2aa-4247-a37e-ef5ce5d54a36] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 18:25:16.337452  211246 system_pods.go:61] "etcd-no-preload-729957" [29023f58-84ea-44ad-b6e8-cc5cf720a4be] Running
	I1018 18:25:16.337458  211246 system_pods.go:61] "kindnet-4hbt7" [6c9fa05f-7c37-442d-b3fa-ee037c865d3e] Running
	I1018 18:25:16.337463  211246 system_pods.go:61] "kube-apiserver-no-preload-729957" [ea721a8e-b407-4422-b1c1-dc40032787ee] Running
	I1018 18:25:16.337468  211246 system_pods.go:61] "kube-controller-manager-no-preload-729957" [bf889e9e-777e-403a-b4ef-3582a86bafbb] Running
	I1018 18:25:16.337472  211246 system_pods.go:61] "kube-proxy-75znn" [c6f7e4f1-ccc0-40c5-b449-fb42e743f373] Running
	I1018 18:25:16.337477  211246 system_pods.go:61] "kube-scheduler-no-preload-729957" [fa436526-c2f9-43b9-a48e-57dc63916082] Running
	I1018 18:25:16.337484  211246 system_pods.go:61] "storage-provisioner" [4bef6a17-c67c-4394-837e-c20c6378a6ed] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 18:25:16.337490  211246 system_pods.go:74] duration metric: took 7.091907ms to wait for pod list to return data ...
	I1018 18:25:16.337497  211246 default_sa.go:34] waiting for default service account to be created ...
	I1018 18:25:16.341174  211246 default_sa.go:45] found service account: "default"
	I1018 18:25:16.341249  211246 default_sa.go:55] duration metric: took 3.744852ms for default service account to be created ...
	I1018 18:25:16.341260  211246 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 18:25:16.344812  211246 system_pods.go:86] 8 kube-system pods found
	I1018 18:25:16.344842  211246 system_pods.go:89] "coredns-66bc5c9577-q7mng" [365b51ac-c2aa-4247-a37e-ef5ce5d54a36] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 18:25:16.344848  211246 system_pods.go:89] "etcd-no-preload-729957" [29023f58-84ea-44ad-b6e8-cc5cf720a4be] Running
	I1018 18:25:16.344854  211246 system_pods.go:89] "kindnet-4hbt7" [6c9fa05f-7c37-442d-b3fa-ee037c865d3e] Running
	I1018 18:25:16.344859  211246 system_pods.go:89] "kube-apiserver-no-preload-729957" [ea721a8e-b407-4422-b1c1-dc40032787ee] Running
	I1018 18:25:16.344863  211246 system_pods.go:89] "kube-controller-manager-no-preload-729957" [bf889e9e-777e-403a-b4ef-3582a86bafbb] Running
	I1018 18:25:16.344867  211246 system_pods.go:89] "kube-proxy-75znn" [c6f7e4f1-ccc0-40c5-b449-fb42e743f373] Running
	I1018 18:25:16.344871  211246 system_pods.go:89] "kube-scheduler-no-preload-729957" [fa436526-c2f9-43b9-a48e-57dc63916082] Running
	I1018 18:25:16.344878  211246 system_pods.go:89] "storage-provisioner" [4bef6a17-c67c-4394-837e-c20c6378a6ed] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 18:25:16.344911  211246 retry.go:31] will retry after 310.808913ms: missing components: kube-dns
	I1018 18:25:16.660977  211246 system_pods.go:86] 8 kube-system pods found
	I1018 18:25:16.661009  211246 system_pods.go:89] "coredns-66bc5c9577-q7mng" [365b51ac-c2aa-4247-a37e-ef5ce5d54a36] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 18:25:16.661017  211246 system_pods.go:89] "etcd-no-preload-729957" [29023f58-84ea-44ad-b6e8-cc5cf720a4be] Running
	I1018 18:25:16.661023  211246 system_pods.go:89] "kindnet-4hbt7" [6c9fa05f-7c37-442d-b3fa-ee037c865d3e] Running
	I1018 18:25:16.661027  211246 system_pods.go:89] "kube-apiserver-no-preload-729957" [ea721a8e-b407-4422-b1c1-dc40032787ee] Running
	I1018 18:25:16.661032  211246 system_pods.go:89] "kube-controller-manager-no-preload-729957" [bf889e9e-777e-403a-b4ef-3582a86bafbb] Running
	I1018 18:25:16.661036  211246 system_pods.go:89] "kube-proxy-75znn" [c6f7e4f1-ccc0-40c5-b449-fb42e743f373] Running
	I1018 18:25:16.661040  211246 system_pods.go:89] "kube-scheduler-no-preload-729957" [fa436526-c2f9-43b9-a48e-57dc63916082] Running
	I1018 18:25:16.661046  211246 system_pods.go:89] "storage-provisioner" [4bef6a17-c67c-4394-837e-c20c6378a6ed] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 18:25:16.661059  211246 retry.go:31] will retry after 269.256949ms: missing components: kube-dns
	I1018 18:25:16.936209  211246 system_pods.go:86] 8 kube-system pods found
	I1018 18:25:16.936239  211246 system_pods.go:89] "coredns-66bc5c9577-q7mng" [365b51ac-c2aa-4247-a37e-ef5ce5d54a36] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 18:25:16.936248  211246 system_pods.go:89] "etcd-no-preload-729957" [29023f58-84ea-44ad-b6e8-cc5cf720a4be] Running
	I1018 18:25:16.936255  211246 system_pods.go:89] "kindnet-4hbt7" [6c9fa05f-7c37-442d-b3fa-ee037c865d3e] Running
	I1018 18:25:16.936260  211246 system_pods.go:89] "kube-apiserver-no-preload-729957" [ea721a8e-b407-4422-b1c1-dc40032787ee] Running
	I1018 18:25:16.936265  211246 system_pods.go:89] "kube-controller-manager-no-preload-729957" [bf889e9e-777e-403a-b4ef-3582a86bafbb] Running
	I1018 18:25:16.936269  211246 system_pods.go:89] "kube-proxy-75znn" [c6f7e4f1-ccc0-40c5-b449-fb42e743f373] Running
	I1018 18:25:16.936273  211246 system_pods.go:89] "kube-scheduler-no-preload-729957" [fa436526-c2f9-43b9-a48e-57dc63916082] Running
	I1018 18:25:16.936279  211246 system_pods.go:89] "storage-provisioner" [4bef6a17-c67c-4394-837e-c20c6378a6ed] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 18:25:16.936293  211246 retry.go:31] will retry after 380.700224ms: missing components: kube-dns
	I1018 18:25:17.325267  211246 system_pods.go:86] 8 kube-system pods found
	I1018 18:25:17.325309  211246 system_pods.go:89] "coredns-66bc5c9577-q7mng" [365b51ac-c2aa-4247-a37e-ef5ce5d54a36] Running
	I1018 18:25:17.325317  211246 system_pods.go:89] "etcd-no-preload-729957" [29023f58-84ea-44ad-b6e8-cc5cf720a4be] Running
	I1018 18:25:17.325322  211246 system_pods.go:89] "kindnet-4hbt7" [6c9fa05f-7c37-442d-b3fa-ee037c865d3e] Running
	I1018 18:25:17.325326  211246 system_pods.go:89] "kube-apiserver-no-preload-729957" [ea721a8e-b407-4422-b1c1-dc40032787ee] Running
	I1018 18:25:17.325331  211246 system_pods.go:89] "kube-controller-manager-no-preload-729957" [bf889e9e-777e-403a-b4ef-3582a86bafbb] Running
	I1018 18:25:17.325335  211246 system_pods.go:89] "kube-proxy-75znn" [c6f7e4f1-ccc0-40c5-b449-fb42e743f373] Running
	I1018 18:25:17.325343  211246 system_pods.go:89] "kube-scheduler-no-preload-729957" [fa436526-c2f9-43b9-a48e-57dc63916082] Running
	I1018 18:25:17.325347  211246 system_pods.go:89] "storage-provisioner" [4bef6a17-c67c-4394-837e-c20c6378a6ed] Running
	I1018 18:25:17.325356  211246 system_pods.go:126] duration metric: took 984.088997ms to wait for k8s-apps to be running ...
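Note (annotation, not part of the captured log): the repeated "will retry after ...ms: missing components: kube-dns" lines above come from a poll-and-retry loop: list the kube-system pods, check that each required component is Running, and sleep a short interval before trying again. The Go sketch below shows that general shape with a stubbed check function standing in for the real pod listing; it is an illustration, not the retry.go implementation:

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // retryUntil repeatedly calls check until it reports no missing
    // components or the deadline passes, sleeping a short randomized
    // interval between attempts (mirroring the "will retry after" lines).
    func retryUntil(timeout time.Duration, check func() []string) error {
        deadline := time.Now().Add(timeout)
        for {
            missing := check()
            if len(missing) == 0 {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out, still missing: %v", missing)
            }
            wait := time.Duration(200+rand.Intn(300)) * time.Millisecond
            fmt.Printf("will retry after %v: missing components: %v\n", wait, missing)
            time.Sleep(wait)
        }
    }

    func main() {
        attempts := 0
        // Stub check that needs a few attempts before kube-dns shows up,
        // standing in for a real pod listing against the API server.
        err := retryUntil(30*time.Second, func() []string {
            attempts++
            if attempts < 4 {
                return []string{"kube-dns"}
            }
            return nil
        })
        fmt.Println("done:", err)
    }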
	I1018 18:25:17.325366  211246 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 18:25:17.325444  211246 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 18:25:17.351014  211246 system_svc.go:56] duration metric: took 25.635078ms WaitForService to wait for kubelet
	I1018 18:25:17.351055  211246 kubeadm.go:586] duration metric: took 17.851701666s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 18:25:17.351075  211246 node_conditions.go:102] verifying NodePressure condition ...
	I1018 18:25:17.356618  211246 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 18:25:17.356692  211246 node_conditions.go:123] node cpu capacity is 2
	I1018 18:25:17.356711  211246 node_conditions.go:105] duration metric: took 5.630087ms to run NodePressure ...
	I1018 18:25:17.356723  211246 start.go:241] waiting for startup goroutines ...
	I1018 18:25:17.356734  211246 start.go:246] waiting for cluster config update ...
	I1018 18:25:17.356745  211246 start.go:255] writing updated cluster config ...
	I1018 18:25:17.357185  211246 ssh_runner.go:195] Run: rm -f paused
	I1018 18:25:17.363349  211246 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 18:25:17.369122  211246 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-q7mng" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:25:17.377540  211246 pod_ready.go:94] pod "coredns-66bc5c9577-q7mng" is "Ready"
	I1018 18:25:17.377646  211246 pod_ready.go:86] duration metric: took 8.417914ms for pod "coredns-66bc5c9577-q7mng" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:25:17.381935  211246 pod_ready.go:83] waiting for pod "etcd-no-preload-729957" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:25:17.389829  211246 pod_ready.go:94] pod "etcd-no-preload-729957" is "Ready"
	I1018 18:25:17.389930  211246 pod_ready.go:86] duration metric: took 7.909328ms for pod "etcd-no-preload-729957" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:25:17.394056  211246 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-729957" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:25:17.402059  211246 pod_ready.go:94] pod "kube-apiserver-no-preload-729957" is "Ready"
	I1018 18:25:17.402146  211246 pod_ready.go:86] duration metric: took 8.008439ms for pod "kube-apiserver-no-preload-729957" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:25:17.407472  211246 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-729957" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:25:17.767787  211246 pod_ready.go:94] pod "kube-controller-manager-no-preload-729957" is "Ready"
	I1018 18:25:17.767816  211246 pod_ready.go:86] duration metric: took 360.263833ms for pod "kube-controller-manager-no-preload-729957" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:25:17.967780  211246 pod_ready.go:83] waiting for pod "kube-proxy-75znn" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:25:18.368094  211246 pod_ready.go:94] pod "kube-proxy-75znn" is "Ready"
	I1018 18:25:18.368168  211246 pod_ready.go:86] duration metric: took 400.362877ms for pod "kube-proxy-75znn" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:25:18.568791  211246 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-729957" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:25:18.967573  211246 pod_ready.go:94] pod "kube-scheduler-no-preload-729957" is "Ready"
	I1018 18:25:18.967602  211246 pod_ready.go:86] duration metric: took 398.736296ms for pod "kube-scheduler-no-preload-729957" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:25:18.967618  211246 pod_ready.go:40] duration metric: took 1.604161571s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 18:25:19.023503  211246 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 18:25:19.028722  211246 out.go:179] * Done! kubectl is now configured to use "no-preload-729957" cluster and "default" namespace by default
	I1018 18:25:17.795223  215342 out.go:252]   - Configuring RBAC rules ...
	I1018 18:25:17.795362  215342 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 18:25:17.808247  215342 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 18:25:17.817703  215342 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 18:25:17.822423  215342 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 18:25:17.826737  215342 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 18:25:17.833546  215342 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 18:25:18.136055  215342 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 18:25:18.590231  215342 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 18:25:19.138279  215342 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 18:25:19.139753  215342 kubeadm.go:318] 
	I1018 18:25:19.139872  215342 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 18:25:19.139921  215342 kubeadm.go:318] 
	I1018 18:25:19.140018  215342 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 18:25:19.140024  215342 kubeadm.go:318] 
	I1018 18:25:19.140051  215342 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 18:25:19.140330  215342 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 18:25:19.140408  215342 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 18:25:19.140420  215342 kubeadm.go:318] 
	I1018 18:25:19.140506  215342 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 18:25:19.140515  215342 kubeadm.go:318] 
	I1018 18:25:19.140583  215342 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 18:25:19.140597  215342 kubeadm.go:318] 
	I1018 18:25:19.140682  215342 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 18:25:19.140885  215342 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 18:25:19.141065  215342 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 18:25:19.141076  215342 kubeadm.go:318] 
	I1018 18:25:19.141224  215342 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 18:25:19.141319  215342 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 18:25:19.141326  215342 kubeadm.go:318] 
	I1018 18:25:19.141414  215342 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 5rfsqk.2p7sczsef9jhpde8 \
	I1018 18:25:19.141522  215342 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:d0244c5bf86cdf97546c6a22045cb6ed9d7ead524d9c98d9ca35da77d5d7a04d \
	I1018 18:25:19.141544  215342 kubeadm.go:318] 	--control-plane 
	I1018 18:25:19.141548  215342 kubeadm.go:318] 
	I1018 18:25:19.141638  215342 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 18:25:19.141642  215342 kubeadm.go:318] 
	I1018 18:25:19.141728  215342 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 5rfsqk.2p7sczsef9jhpde8 \
	I1018 18:25:19.142092  215342 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:d0244c5bf86cdf97546c6a22045cb6ed9d7ead524d9c98d9ca35da77d5d7a04d 
	I1018 18:25:19.148616  215342 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1018 18:25:19.148883  215342 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1018 18:25:19.149048  215342 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1018 18:25:19.150763  215342 cni.go:84] Creating CNI manager for ""
	I1018 18:25:19.150782  215342 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 18:25:19.156284  215342 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1018 18:25:19.159596  215342 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 18:25:19.170018  215342 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1018 18:25:19.170045  215342 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 18:25:19.187275  215342 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1018 18:25:19.529975  215342 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 18:25:19.530128  215342 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:25:19.530232  215342 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-530891 minikube.k8s.io/updated_at=2025_10_18T18_25_19_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404 minikube.k8s.io/name=newest-cni-530891 minikube.k8s.io/primary=true
	I1018 18:25:19.546403  215342 ops.go:34] apiserver oom_adj: -16
	I1018 18:25:19.716145  215342 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:25:20.216259  215342 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:25:20.716283  215342 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:25:21.216211  215342 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:25:21.716673  215342 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:25:22.216245  215342 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:25:22.716412  215342 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:25:23.217127  215342 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:25:23.717125  215342 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:25:24.217153  215342 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:25:24.384312  215342 kubeadm.go:1113] duration metric: took 4.854232202s to wait for elevateKubeSystemPrivileges
	I1018 18:25:24.384345  215342 kubeadm.go:402] duration metric: took 23.130230042s to StartCluster
	I1018 18:25:24.384362  215342 settings.go:142] acquiring lock: {Name:mk3a3fd093bc95e20cc1842611fedcbe4a79e692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:25:24.384435  215342 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-2509/kubeconfig
	I1018 18:25:24.385461  215342 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/kubeconfig: {Name:mk229d7d0b6575771899e0ce8a346bbddf5cf86b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:25:24.385698  215342 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 18:25:24.385706  215342 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 18:25:24.385959  215342 config.go:182] Loaded profile config "newest-cni-530891": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 18:25:24.386001  215342 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 18:25:24.386071  215342 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-530891"
	I1018 18:25:24.386092  215342 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-530891"
	I1018 18:25:24.386118  215342 host.go:66] Checking if "newest-cni-530891" exists ...
	I1018 18:25:24.386565  215342 cli_runner.go:164] Run: docker container inspect newest-cni-530891 --format={{.State.Status}}
	I1018 18:25:24.387029  215342 addons.go:69] Setting default-storageclass=true in profile "newest-cni-530891"
	I1018 18:25:24.387054  215342 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-530891"
	I1018 18:25:24.387312  215342 cli_runner.go:164] Run: docker container inspect newest-cni-530891 --format={{.State.Status}}
	I1018 18:25:24.388985  215342 out.go:179] * Verifying Kubernetes components...
	I1018 18:25:24.393047  215342 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 18:25:24.434895  215342 addons.go:238] Setting addon default-storageclass=true in "newest-cni-530891"
	I1018 18:25:24.434933  215342 host.go:66] Checking if "newest-cni-530891" exists ...
	I1018 18:25:24.435350  215342 cli_runner.go:164] Run: docker container inspect newest-cni-530891 --format={{.State.Status}}
	I1018 18:25:24.435607  215342 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 18:25:24.438628  215342 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 18:25:24.438656  215342 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 18:25:24.438731  215342 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-530891
	I1018 18:25:24.472035  215342 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 18:25:24.472055  215342 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 18:25:24.472117  215342 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-530891
	I1018 18:25:24.497060  215342 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/newest-cni-530891/id_rsa Username:docker}
	I1018 18:25:24.505708  215342 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/newest-cni-530891/id_rsa Username:docker}
	I1018 18:25:24.786614  215342 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 18:25:24.789277  215342 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1018 18:25:24.789355  215342 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 18:25:24.836542  215342 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 18:25:25.539322  215342 api_server.go:52] waiting for apiserver process to appear ...
	I1018 18:25:25.539422  215342 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1018 18:25:25.540991  215342 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 18:25:25.568738  215342 api_server.go:72] duration metric: took 1.183004264s to wait for apiserver process to appear ...
	I1018 18:25:25.568758  215342 api_server.go:88] waiting for apiserver healthz status ...
	I1018 18:25:25.568777  215342 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 18:25:25.586653  215342 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1018 18:25:25.589312  215342 api_server.go:141] control plane version: v1.34.1
	I1018 18:25:25.589336  215342 api_server.go:131] duration metric: took 20.571284ms to wait for apiserver health ...
	I1018 18:25:25.589345  215342 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 18:25:25.589592  215342 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1018 18:25:25.592763  215342 addons.go:514] duration metric: took 1.206752401s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1018 18:25:25.594678  215342 system_pods.go:59] 9 kube-system pods found
	I1018 18:25:25.594746  215342 system_pods.go:61] "coredns-66bc5c9577-brzb4" [762df58f-b70f-479e-b130-07c24a8f3f51] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1018 18:25:25.594769  215342 system_pods.go:61] "coredns-66bc5c9577-zc9k5" [45c70073-29fd-4d82-9dfb-d20628d4a3de] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1018 18:25:25.594796  215342 system_pods.go:61] "etcd-newest-cni-530891" [1cf783e9-928f-47f5-be9d-4df2479e9b31] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 18:25:25.594826  215342 system_pods.go:61] "kindnet-497z4" [99e6305c-fb9e-4f10-9746-3dfdd03c570a] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1018 18:25:25.594852  215342 system_pods.go:61] "kube-apiserver-newest-cni-530891" [b43d0e4b-98c3-4c5e-96dc-4ab8c7913e63] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 18:25:25.594868  215342 system_pods.go:61] "kube-controller-manager-newest-cni-530891" [0d687e21-ef2f-4a67-94ea-d40750239b57] Running
	I1018 18:25:25.594896  215342 system_pods.go:61] "kube-proxy-k8ljb" [2f4233c2-bc5d-452a-84e3-875564801a54] Pending
	I1018 18:25:25.594917  215342 system_pods.go:61] "kube-scheduler-newest-cni-530891" [a81c1ce2-edc2-4f88-aebd-d06916133c38] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 18:25:25.594943  215342 system_pods.go:61] "storage-provisioner" [b2348e9f-6e43-4f09-a0c0-01ab697d968a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1018 18:25:25.594972  215342 system_pods.go:74] duration metric: took 5.62029ms to wait for pod list to return data ...
	I1018 18:25:25.594993  215342 default_sa.go:34] waiting for default service account to be created ...
	I1018 18:25:25.605830  215342 default_sa.go:45] found service account: "default"
	I1018 18:25:25.605903  215342 default_sa.go:55] duration metric: took 10.891118ms for default service account to be created ...
	I1018 18:25:25.605930  215342 kubeadm.go:586] duration metric: took 1.220200451s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1018 18:25:25.605983  215342 node_conditions.go:102] verifying NodePressure condition ...
	I1018 18:25:25.618693  215342 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 18:25:25.618774  215342 node_conditions.go:123] node cpu capacity is 2
	I1018 18:25:25.618812  215342 node_conditions.go:105] duration metric: took 12.80343ms to run NodePressure ...
	I1018 18:25:25.618836  215342 start.go:241] waiting for startup goroutines ...
	I1018 18:25:26.043788  215342 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-530891" context rescaled to 1 replicas
	I1018 18:25:26.043836  215342 start.go:246] waiting for cluster config update ...
	I1018 18:25:26.043869  215342 start.go:255] writing updated cluster config ...
	I1018 18:25:26.044199  215342 ssh_runner.go:195] Run: rm -f paused
	I1018 18:25:26.110679  215342 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 18:25:26.113893  215342 out.go:179] * Done! kubectl is now configured to use "newest-cni-530891" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 18 18:25:16 no-preload-729957 crio[841]: time="2025-10-18T18:25:16.693989685Z" level=info msg="Starting container: 7bc3c130ba074d8f92baa3df0b852710d531388fb1e9ead66551074c3fc207ee" id=0100c165-a8e4-4a3f-9522-f5288ffa7b2b name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 18:25:16 no-preload-729957 crio[841]: time="2025-10-18T18:25:16.694219826Z" level=info msg="Started container" PID=2478 containerID=624b38a0829e2dd96d812cb4e92f94b584a7e1ef1938c0458fac9d10ed6c59df description=kube-system/coredns-66bc5c9577-q7mng/coredns id=2db265a3-3931-4839-a276-f48ae5fdba0c name=/runtime.v1.RuntimeService/StartContainer sandboxID=9d8f8ae17302610809babb33e37afff898279974ddbc3b5637c2a5b8d04f8f79
	Oct 18 18:25:16 no-preload-729957 crio[841]: time="2025-10-18T18:25:16.697958352Z" level=info msg="Started container" PID=2477 containerID=7bc3c130ba074d8f92baa3df0b852710d531388fb1e9ead66551074c3fc207ee description=kube-system/storage-provisioner/storage-provisioner id=0100c165-a8e4-4a3f-9522-f5288ffa7b2b name=/runtime.v1.RuntimeService/StartContainer sandboxID=aab5c1bf23558e62f633cf87a565a8f4f458719e3141218f7875004d9f2b51f5
	Oct 18 18:25:19 no-preload-729957 crio[841]: time="2025-10-18T18:25:19.588652376Z" level=info msg="Running pod sandbox: default/busybox/POD" id=b455f5ea-b928-4330-ad35-8ad22b9f8be9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 18:25:19 no-preload-729957 crio[841]: time="2025-10-18T18:25:19.588724475Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 18:25:19 no-preload-729957 crio[841]: time="2025-10-18T18:25:19.59619319Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:7ff39ca547c8d86e1f040d60cae4cfee4422e4459ed6946cb5937c12380c760d UID:e89c4c23-75f1-45fd-a06e-77828509a4b3 NetNS:/var/run/netns/95870677-c3dc-4eb0-8cc4-9e6a7b3f8ef8 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40016be7e0}] Aliases:map[]}"
	Oct 18 18:25:19 no-preload-729957 crio[841]: time="2025-10-18T18:25:19.596442375Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 18 18:25:19 no-preload-729957 crio[841]: time="2025-10-18T18:25:19.614610387Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:7ff39ca547c8d86e1f040d60cae4cfee4422e4459ed6946cb5937c12380c760d UID:e89c4c23-75f1-45fd-a06e-77828509a4b3 NetNS:/var/run/netns/95870677-c3dc-4eb0-8cc4-9e6a7b3f8ef8 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40016be7e0}] Aliases:map[]}"
	Oct 18 18:25:19 no-preload-729957 crio[841]: time="2025-10-18T18:25:19.614761913Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 18 18:25:19 no-preload-729957 crio[841]: time="2025-10-18T18:25:19.622692904Z" level=info msg="Ran pod sandbox 7ff39ca547c8d86e1f040d60cae4cfee4422e4459ed6946cb5937c12380c760d with infra container: default/busybox/POD" id=b455f5ea-b928-4330-ad35-8ad22b9f8be9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 18:25:19 no-preload-729957 crio[841]: time="2025-10-18T18:25:19.623747104Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=3ad81c5f-d0ef-451d-90d4-9c5a0d627406 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 18:25:19 no-preload-729957 crio[841]: time="2025-10-18T18:25:19.623876418Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=3ad81c5f-d0ef-451d-90d4-9c5a0d627406 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 18:25:19 no-preload-729957 crio[841]: time="2025-10-18T18:25:19.623919972Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=3ad81c5f-d0ef-451d-90d4-9c5a0d627406 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 18:25:19 no-preload-729957 crio[841]: time="2025-10-18T18:25:19.624665778Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=595f37dc-8a72-492a-b1f9-3e52bc626126 name=/runtime.v1.ImageService/PullImage
	Oct 18 18:25:19 no-preload-729957 crio[841]: time="2025-10-18T18:25:19.626985964Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 18 18:25:21 no-preload-729957 crio[841]: time="2025-10-18T18:25:21.54608483Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=595f37dc-8a72-492a-b1f9-3e52bc626126 name=/runtime.v1.ImageService/PullImage
	Oct 18 18:25:21 no-preload-729957 crio[841]: time="2025-10-18T18:25:21.546754722Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=5d2cb3af-3c73-4b3f-89b5-b070be1b3ed8 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 18:25:21 no-preload-729957 crio[841]: time="2025-10-18T18:25:21.549184439Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=7d92998f-7898-435d-99c4-e6f65049bd19 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 18:25:21 no-preload-729957 crio[841]: time="2025-10-18T18:25:21.555404359Z" level=info msg="Creating container: default/busybox/busybox" id=84929b05-dac9-404f-8ad4-d84acf1ee7d6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 18:25:21 no-preload-729957 crio[841]: time="2025-10-18T18:25:21.556202917Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 18:25:21 no-preload-729957 crio[841]: time="2025-10-18T18:25:21.563376573Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 18:25:21 no-preload-729957 crio[841]: time="2025-10-18T18:25:21.563850294Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 18:25:21 no-preload-729957 crio[841]: time="2025-10-18T18:25:21.579101509Z" level=info msg="Created container 4569ac8cbb5d938783c4c5855927461276ff81f0dfabf8bd0325c4934209ae00: default/busybox/busybox" id=84929b05-dac9-404f-8ad4-d84acf1ee7d6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 18:25:21 no-preload-729957 crio[841]: time="2025-10-18T18:25:21.57999803Z" level=info msg="Starting container: 4569ac8cbb5d938783c4c5855927461276ff81f0dfabf8bd0325c4934209ae00" id=c75067da-9810-4a6e-a6cb-73f0c84edbf6 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 18:25:21 no-preload-729957 crio[841]: time="2025-10-18T18:25:21.582193701Z" level=info msg="Started container" PID=2538 containerID=4569ac8cbb5d938783c4c5855927461276ff81f0dfabf8bd0325c4934209ae00 description=default/busybox/busybox id=c75067da-9810-4a6e-a6cb-73f0c84edbf6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7ff39ca547c8d86e1f040d60cae4cfee4422e4459ed6946cb5937c12380c760d
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	4569ac8cbb5d9       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago       Running             busybox                   0                   7ff39ca547c8d       busybox                                     default
	624b38a0829e2       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      13 seconds ago      Running             coredns                   0                   9d8f8ae173026       coredns-66bc5c9577-q7mng                    kube-system
	7bc3c130ba074       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                      13 seconds ago      Running             storage-provisioner       0                   aab5c1bf23558       storage-provisioner                         kube-system
	37870c8c606b6       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    24 seconds ago      Running             kindnet-cni               0                   81e3e3ef99e9b       kindnet-4hbt7                               kube-system
	2e05cb62df3f4       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      30 seconds ago      Running             kube-proxy                0                   029a5398af44f       kube-proxy-75znn                            kube-system
	5fbd928976845       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      46 seconds ago      Running             kube-scheduler            0                   2dbf45fc8abd9       kube-scheduler-no-preload-729957            kube-system
	4a34313e955f7       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      46 seconds ago      Running             kube-apiserver            0                   c748a10122967       kube-apiserver-no-preload-729957            kube-system
	a1079809002e4       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      46 seconds ago      Running             etcd                      0                   012e348842252       etcd-no-preload-729957                      kube-system
	f8724ceac6cb8       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      46 seconds ago      Running             kube-controller-manager   0                   be7c0c1606948       kube-controller-manager-no-preload-729957   kube-system
	
	
	==> coredns [624b38a0829e2dd96d812cb4e92f94b584a7e1ef1938c0458fac9d10ed6c59df] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55488 - 52043 "HINFO IN 1708986225900047142.2723382927589152581. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.026108677s
	
	
	==> describe nodes <==
	Name:               no-preload-729957
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-729957
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=no-preload-729957
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T18_24_55_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 18:24:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-729957
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 18:25:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 18:25:25 +0000   Sat, 18 Oct 2025 18:24:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 18:25:25 +0000   Sat, 18 Oct 2025 18:24:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 18:25:25 +0000   Sat, 18 Oct 2025 18:24:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 18:25:25 +0000   Sat, 18 Oct 2025 18:25:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-729957
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                767ca1b7-c7ba-48aa-bccb-3679302b1946
	  Boot ID:                    8a1d6305-2994-4aa4-a4cc-6b62966b9918
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-q7mng                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     30s
	  kube-system                 etcd-no-preload-729957                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         35s
	  kube-system                 kindnet-4hbt7                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      31s
	  kube-system                 kube-apiserver-no-preload-729957             250m (12%)    0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-controller-manager-no-preload-729957    200m (10%)    0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-proxy-75znn                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-scheduler-no-preload-729957             100m (5%)     0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 28s                kube-proxy       
	  Normal   NodeHasSufficientMemory  47s (x8 over 47s)  kubelet          Node no-preload-729957 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    47s (x8 over 47s)  kubelet          Node no-preload-729957 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     47s (x8 over 47s)  kubelet          Node no-preload-729957 status is now: NodeHasSufficientPID
	  Normal   Starting                 36s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 36s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  35s                kubelet          Node no-preload-729957 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    35s                kubelet          Node no-preload-729957 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     35s                kubelet          Node no-preload-729957 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           32s                node-controller  Node no-preload-729957 event: Registered Node no-preload-729957 in Controller
	  Normal   NodeReady                14s                kubelet          Node no-preload-729957 status is now: NodeReady
	
	
	==> dmesg <==
	[ +24.403909] overlayfs: idmapped layers are currently not supported
	[  +6.162774] overlayfs: idmapped layers are currently not supported
	[Oct18 18:05] overlayfs: idmapped layers are currently not supported
	[ +25.128760] overlayfs: idmapped layers are currently not supported
	[Oct18 18:06] overlayfs: idmapped layers are currently not supported
	[Oct18 18:07] overlayfs: idmapped layers are currently not supported
	[Oct18 18:08] overlayfs: idmapped layers are currently not supported
	[Oct18 18:09] overlayfs: idmapped layers are currently not supported
	[Oct18 18:11] overlayfs: idmapped layers are currently not supported
	[Oct18 18:13] overlayfs: idmapped layers are currently not supported
	[ +30.969240] overlayfs: idmapped layers are currently not supported
	[Oct18 18:15] overlayfs: idmapped layers are currently not supported
	[Oct18 18:16] overlayfs: idmapped layers are currently not supported
	[Oct18 18:17] overlayfs: idmapped layers are currently not supported
	[ +23.167826] overlayfs: idmapped layers are currently not supported
	[Oct18 18:18] overlayfs: idmapped layers are currently not supported
	[ +38.509809] overlayfs: idmapped layers are currently not supported
	[Oct18 18:19] overlayfs: idmapped layers are currently not supported
	[Oct18 18:21] overlayfs: idmapped layers are currently not supported
	[Oct18 18:22] overlayfs: idmapped layers are currently not supported
	[Oct18 18:23] overlayfs: idmapped layers are currently not supported
	[ +30.822562] overlayfs: idmapped layers are currently not supported
	[Oct18 18:24] bpfilter: read fail -512
	[ +10.607871] overlayfs: idmapped layers are currently not supported
	[Oct18 18:25] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [a1079809002e498d77ff9d88878443404d6a5c177f4525387ab8927832918f9f] <==
	{"level":"warn","ts":"2025-10-18T18:24:47.899773Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:24:47.926574Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:24:47.954150Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:24:47.979304Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:24:48.013186Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:24:48.073095Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:24:48.095581Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:24:48.241748Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:24:50.155931Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"114.732157ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/default\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-10-18T18:24:50.156000Z","caller":"traceutil/trace.go:172","msg":"trace[149125449] range","detail":"{range_begin:/registry/namespaces/default; range_end:; response_count:0; response_revision:7; }","duration":"114.812093ms","start":"2025-10-18T18:24:50.041174Z","end":"2025-10-18T18:24:50.155986Z","steps":["trace[149125449] 'agreement among raft nodes before linearized reading'  (duration: 57.004802ms)","trace[149125449] 'range keys from in-memory index tree'  (duration: 57.714276ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-18T18:24:50.156426Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"115.413266ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-10-18T18:24:50.156463Z","caller":"traceutil/trace.go:172","msg":"trace[43372545] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:7; }","duration":"115.451536ms","start":"2025-10-18T18:24:50.041003Z","end":"2025-10-18T18:24:50.156454Z","steps":["trace[43372545] 'agreement among raft nodes before linearized reading'  (duration: 57.192063ms)","trace[43372545] 'range keys from in-memory index tree'  (duration: 58.203267ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-18T18:24:50.156956Z","caller":"traceutil/trace.go:172","msg":"trace[1064566521] transaction","detail":"{read_only:false; response_revision:8; number_of_response:1; }","duration":"126.007167ms","start":"2025-10-18T18:24:50.030917Z","end":"2025-10-18T18:24:50.156924Z","steps":["trace[1064566521] 'process raft request'  (duration: 67.249201ms)","trace[1064566521] 'compare'  (duration: 58.031811ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-18T18:24:50.157103Z","caller":"traceutil/trace.go:172","msg":"trace[1569146441] transaction","detail":"{read_only:false; response_revision:9; number_of_response:1; }","duration":"126.119414ms","start":"2025-10-18T18:24:50.030976Z","end":"2025-10-18T18:24:50.157096Z","steps":["trace[1569146441] 'process raft request'  (duration: 125.617049ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T18:24:50.157186Z","caller":"traceutil/trace.go:172","msg":"trace[727860363] transaction","detail":"{read_only:false; response_revision:10; number_of_response:1; }","duration":"118.761404ms","start":"2025-10-18T18:24:50.038419Z","end":"2025-10-18T18:24:50.157180Z","steps":["trace[727860363] 'process raft request'  (duration: 118.220572ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T18:24:50.157258Z","caller":"traceutil/trace.go:172","msg":"trace[541324494] transaction","detail":"{read_only:false; response_revision:11; number_of_response:1; }","duration":"118.772703ms","start":"2025-10-18T18:24:50.038480Z","end":"2025-10-18T18:24:50.157252Z","steps":["trace[541324494] 'process raft request'  (duration: 118.177306ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T18:24:50.157435Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"101.468775ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-729957\" limit:1 ","response":"range_response_count:1 size:3242"}
	{"level":"info","ts":"2025-10-18T18:24:50.157464Z","caller":"traceutil/trace.go:172","msg":"trace[790565856] range","detail":"{range_begin:/registry/minions/no-preload-729957; range_end:; response_count:1; response_revision:16; }","duration":"101.507315ms","start":"2025-10-18T18:24:50.055950Z","end":"2025-10-18T18:24:50.157458Z","steps":["trace[790565856] 'agreement among raft nodes before linearized reading'  (duration: 101.40726ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T18:24:50.157615Z","caller":"traceutil/trace.go:172","msg":"trace[1450497529] transaction","detail":"{read_only:false; response_revision:12; number_of_response:1; }","duration":"111.42622ms","start":"2025-10-18T18:24:50.046182Z","end":"2025-10-18T18:24:50.157609Z","steps":["trace[1450497529] 'process raft request'  (duration: 110.497511ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T18:24:50.158346Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"102.232099ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-729957\" limit:1 ","response":"range_response_count:1 size:3242"}
	{"level":"info","ts":"2025-10-18T18:24:50.160996Z","caller":"traceutil/trace.go:172","msg":"trace[1824911165] range","detail":"{range_begin:/registry/minions/no-preload-729957; range_end:; response_count:1; response_revision:16; }","duration":"104.873323ms","start":"2025-10-18T18:24:50.056103Z","end":"2025-10-18T18:24:50.160977Z","steps":["trace[1824911165] 'agreement among raft nodes before linearized reading'  (duration: 102.19804ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T18:24:50.162097Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"119.287474ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" limit:1 ","response":"range_response_count:1 size:350"}
	{"level":"info","ts":"2025-10-18T18:24:50.162139Z","caller":"traceutil/trace.go:172","msg":"trace[864729191] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:7; }","duration":"125.530738ms","start":"2025-10-18T18:24:50.036594Z","end":"2025-10-18T18:24:50.162124Z","steps":["trace[864729191] 'agreement among raft nodes before linearized reading'  (duration: 61.631275ms)","trace[864729191] 'range keys from in-memory index tree'  (duration: 57.628696ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-18T18:24:59.999218Z","caller":"traceutil/trace.go:172","msg":"trace[1254177842] transaction","detail":"{read_only:false; response_revision:381; number_of_response:1; }","duration":"101.47506ms","start":"2025-10-18T18:24:59.897723Z","end":"2025-10-18T18:24:59.999198Z","steps":["trace[1254177842] 'process raft request'  (duration: 94.209441ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T18:24:59.999767Z","caller":"traceutil/trace.go:172","msg":"trace[1803273144] transaction","detail":"{read_only:false; response_revision:382; number_of_response:1; }","duration":"101.819549ms","start":"2025-10-18T18:24:59.897937Z","end":"2025-10-18T18:24:59.999756Z","steps":["trace[1803273144] 'process raft request'  (duration: 95.236777ms)"],"step_count":1}
	
	
	==> kernel <==
	 18:25:30 up  2:07,  0 user,  load average: 4.22, 3.36, 2.90
	Linux no-preload-729957 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [37870c8c606b6aaad23a00c671ef534d97f0bfc5edbe529ddd8b806a1973c039] <==
	I1018 18:25:05.506408       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 18:25:05.506815       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1018 18:25:05.506964       1 main.go:148] setting mtu 1500 for CNI 
	I1018 18:25:05.507003       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 18:25:05.507039       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T18:25:05Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 18:25:05.707332       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 18:25:05.707455       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 18:25:05.707512       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 18:25:05.711575       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 18:25:05.909064       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 18:25:05.909097       1 metrics.go:72] Registering metrics
	I1018 18:25:05.909155       1 controller.go:711] "Syncing nftables rules"
	I1018 18:25:15.713019       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 18:25:15.713076       1 main.go:301] handling current node
	I1018 18:25:25.709045       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 18:25:25.709111       1 main.go:301] handling current node
	
	
	==> kube-apiserver [4a34313e955f7e5bbd7785193773f83b057998b99d7ba276b4aab548e99676a2] <==
	I1018 18:24:49.999740       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 18:24:49.999750       1 cache.go:39] Caches are synced for autoregister controller
	I1018 18:24:50.202727       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 18:24:50.211106       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 18:24:50.216481       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1018 18:24:50.323197       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 18:24:50.339404       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 18:24:50.369597       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1018 18:24:50.542388       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1018 18:24:50.543576       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 18:24:52.803514       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 18:24:52.903731       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 18:24:53.052681       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1018 18:24:53.068175       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1018 18:24:53.069432       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 18:24:53.080268       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 18:24:53.609745       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 18:24:54.486186       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 18:24:54.546607       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1018 18:24:54.637493       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1018 18:24:59.159974       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1018 18:24:59.373337       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 18:24:59.592310       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 18:24:59.739923       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1018 18:25:28.431391       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:38688: use of closed network connection
	
	
	==> kube-controller-manager [f8724ceac6cb8d743c10f391fca596e0dcce13cf5b3c5b5ea9dc2edf551ca655] <==
	I1018 18:24:58.609507       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1018 18:24:58.609514       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1018 18:24:58.609539       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1018 18:24:58.609552       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1018 18:24:58.610678       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 18:24:58.611280       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 18:24:58.611295       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 18:24:58.613432       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1018 18:24:58.614129       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1018 18:24:58.615690       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1018 18:24:58.619298       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 18:24:58.632509       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1018 18:24:58.632711       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 18:24:58.636440       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1018 18:24:58.639859       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1018 18:24:58.650989       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 18:24:58.654352       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1018 18:24:58.654440       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1018 18:24:58.654826       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1018 18:24:58.654875       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1018 18:24:58.654909       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1018 18:24:58.654918       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1018 18:24:58.654923       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1018 18:24:58.664882       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-729957" podCIDRs=["10.244.0.0/24"]
	I1018 18:25:18.566835       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [2e05cb62df3f41456a3ebced2ab436cd5a9ae819712c41ef0ba3bdc8dbc9c181] <==
	I1018 18:25:00.818599       1 server_linux.go:53] "Using iptables proxy"
	I1018 18:25:01.006903       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 18:25:01.107120       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 18:25:01.107160       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1018 18:25:01.107232       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 18:25:01.354055       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 18:25:01.354109       1 server_linux.go:132] "Using iptables Proxier"
	I1018 18:25:01.429946       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 18:25:01.441984       1 server.go:527] "Version info" version="v1.34.1"
	I1018 18:25:01.442004       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 18:25:01.443619       1 config.go:200] "Starting service config controller"
	I1018 18:25:01.443630       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 18:25:01.443648       1 config.go:106] "Starting endpoint slice config controller"
	I1018 18:25:01.443652       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 18:25:01.443662       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 18:25:01.443667       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 18:25:01.444397       1 config.go:309] "Starting node config controller"
	I1018 18:25:01.444406       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 18:25:01.444411       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 18:25:01.543996       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 18:25:01.544040       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 18:25:01.544091       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [5fbd9289768453e84082313a83a819ffdec2c79d11906e14c3c476d56cf36e7b] <==
	E1018 18:24:51.110928       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 18:24:51.118035       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 18:24:51.118341       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 18:24:51.118355       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 18:24:51.118423       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 18:24:51.118461       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 18:24:51.118525       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 18:24:51.118598       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 18:24:51.118601       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 18:24:51.118692       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 18:24:51.118752       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 18:24:51.118762       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 18:24:51.118813       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 18:24:51.118860       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 18:24:51.118948       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 18:24:51.118986       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 18:24:51.948096       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1018 18:24:51.995858       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 18:24:52.056302       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 18:24:52.075000       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 18:24:52.170972       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 18:24:52.212465       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 18:24:52.241886       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 18:24:52.270774       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1018 18:24:54.599123       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 18:24:58 no-preload-729957 kubelet[1998]: I1018 18:24:58.729915    1998 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 18 18:24:58 no-preload-729957 kubelet[1998]: I1018 18:24:58.731094    1998 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 18 18:24:59 no-preload-729957 kubelet[1998]: I1018 18:24:59.300168    1998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64r42\" (UniqueName: \"kubernetes.io/projected/6c9fa05f-7c37-442d-b3fa-ee037c865d3e-kube-api-access-64r42\") pod \"kindnet-4hbt7\" (UID: \"6c9fa05f-7c37-442d-b3fa-ee037c865d3e\") " pod="kube-system/kindnet-4hbt7"
	Oct 18 18:24:59 no-preload-729957 kubelet[1998]: I1018 18:24:59.300866    1998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/6c9fa05f-7c37-442d-b3fa-ee037c865d3e-cni-cfg\") pod \"kindnet-4hbt7\" (UID: \"6c9fa05f-7c37-442d-b3fa-ee037c865d3e\") " pod="kube-system/kindnet-4hbt7"
	Oct 18 18:24:59 no-preload-729957 kubelet[1998]: I1018 18:24:59.301138    1998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6c9fa05f-7c37-442d-b3fa-ee037c865d3e-lib-modules\") pod \"kindnet-4hbt7\" (UID: \"6c9fa05f-7c37-442d-b3fa-ee037c865d3e\") " pod="kube-system/kindnet-4hbt7"
	Oct 18 18:24:59 no-preload-729957 kubelet[1998]: I1018 18:24:59.301175    1998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c6f7e4f1-ccc0-40c5-b449-fb42e743f373-lib-modules\") pod \"kube-proxy-75znn\" (UID: \"c6f7e4f1-ccc0-40c5-b449-fb42e743f373\") " pod="kube-system/kube-proxy-75znn"
	Oct 18 18:24:59 no-preload-729957 kubelet[1998]: I1018 18:24:59.301295    1998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6c9fa05f-7c37-442d-b3fa-ee037c865d3e-xtables-lock\") pod \"kindnet-4hbt7\" (UID: \"6c9fa05f-7c37-442d-b3fa-ee037c865d3e\") " pod="kube-system/kindnet-4hbt7"
	Oct 18 18:24:59 no-preload-729957 kubelet[1998]: I1018 18:24:59.301330    1998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c6f7e4f1-ccc0-40c5-b449-fb42e743f373-xtables-lock\") pod \"kube-proxy-75znn\" (UID: \"c6f7e4f1-ccc0-40c5-b449-fb42e743f373\") " pod="kube-system/kube-proxy-75znn"
	Oct 18 18:24:59 no-preload-729957 kubelet[1998]: I1018 18:24:59.301456    1998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86p7v\" (UniqueName: \"kubernetes.io/projected/c6f7e4f1-ccc0-40c5-b449-fb42e743f373-kube-api-access-86p7v\") pod \"kube-proxy-75znn\" (UID: \"c6f7e4f1-ccc0-40c5-b449-fb42e743f373\") " pod="kube-system/kube-proxy-75znn"
	Oct 18 18:24:59 no-preload-729957 kubelet[1998]: I1018 18:24:59.301484    1998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c6f7e4f1-ccc0-40c5-b449-fb42e743f373-kube-proxy\") pod \"kube-proxy-75znn\" (UID: \"c6f7e4f1-ccc0-40c5-b449-fb42e743f373\") " pod="kube-system/kube-proxy-75znn"
	Oct 18 18:24:59 no-preload-729957 kubelet[1998]: I1018 18:24:59.461330    1998 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 18 18:24:59 no-preload-729957 kubelet[1998]: W1018 18:24:59.650683    1998 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/26cea4068f8df271decd5fca2af44d16fcce605ab26c19246830b355e9629673/crio-81e3e3ef99e9b4b98a10454ad902501838ea0ac59581a8b57fadf66d77d1b754 WatchSource:0}: Error finding container 81e3e3ef99e9b4b98a10454ad902501838ea0ac59581a8b57fadf66d77d1b754: Status 404 returned error can't find the container with id 81e3e3ef99e9b4b98a10454ad902501838ea0ac59581a8b57fadf66d77d1b754
	Oct 18 18:25:01 no-preload-729957 kubelet[1998]: I1018 18:25:01.134788    1998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-75znn" podStartSLOduration=2.134759465 podStartE2EDuration="2.134759465s" podCreationTimestamp="2025-10-18 18:24:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 18:25:01.134637256 +0000 UTC m=+6.727193767" watchObservedRunningTime="2025-10-18 18:25:01.134759465 +0000 UTC m=+6.727315975"
	Oct 18 18:25:16 no-preload-729957 kubelet[1998]: I1018 18:25:16.210141    1998 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 18 18:25:16 no-preload-729957 kubelet[1998]: I1018 18:25:16.249370    1998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-4hbt7" podStartSLOduration=11.614789102 podStartE2EDuration="17.249352861s" podCreationTimestamp="2025-10-18 18:24:59 +0000 UTC" firstStartedPulling="2025-10-18 18:24:59.695323613 +0000 UTC m=+5.287880132" lastFinishedPulling="2025-10-18 18:25:05.32988738 +0000 UTC m=+10.922443891" observedRunningTime="2025-10-18 18:25:06.143487355 +0000 UTC m=+11.736043883" watchObservedRunningTime="2025-10-18 18:25:16.249352861 +0000 UTC m=+21.841909380"
	Oct 18 18:25:16 no-preload-729957 kubelet[1998]: I1018 18:25:16.362268    1998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/4bef6a17-c67c-4394-837e-c20c6378a6ed-tmp\") pod \"storage-provisioner\" (UID: \"4bef6a17-c67c-4394-837e-c20c6378a6ed\") " pod="kube-system/storage-provisioner"
	Oct 18 18:25:16 no-preload-729957 kubelet[1998]: I1018 18:25:16.362328    1998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrp45\" (UniqueName: \"kubernetes.io/projected/365b51ac-c2aa-4247-a37e-ef5ce5d54a36-kube-api-access-mrp45\") pod \"coredns-66bc5c9577-q7mng\" (UID: \"365b51ac-c2aa-4247-a37e-ef5ce5d54a36\") " pod="kube-system/coredns-66bc5c9577-q7mng"
	Oct 18 18:25:16 no-preload-729957 kubelet[1998]: I1018 18:25:16.362351    1998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29x7c\" (UniqueName: \"kubernetes.io/projected/4bef6a17-c67c-4394-837e-c20c6378a6ed-kube-api-access-29x7c\") pod \"storage-provisioner\" (UID: \"4bef6a17-c67c-4394-837e-c20c6378a6ed\") " pod="kube-system/storage-provisioner"
	Oct 18 18:25:16 no-preload-729957 kubelet[1998]: I1018 18:25:16.362371    1998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/365b51ac-c2aa-4247-a37e-ef5ce5d54a36-config-volume\") pod \"coredns-66bc5c9577-q7mng\" (UID: \"365b51ac-c2aa-4247-a37e-ef5ce5d54a36\") " pod="kube-system/coredns-66bc5c9577-q7mng"
	Oct 18 18:25:16 no-preload-729957 kubelet[1998]: W1018 18:25:16.611275    1998 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/26cea4068f8df271decd5fca2af44d16fcce605ab26c19246830b355e9629673/crio-9d8f8ae17302610809babb33e37afff898279974ddbc3b5637c2a5b8d04f8f79 WatchSource:0}: Error finding container 9d8f8ae17302610809babb33e37afff898279974ddbc3b5637c2a5b8d04f8f79: Status 404 returned error can't find the container with id 9d8f8ae17302610809babb33e37afff898279974ddbc3b5637c2a5b8d04f8f79
	Oct 18 18:25:17 no-preload-729957 kubelet[1998]: I1018 18:25:17.186507    1998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.186488007 podStartE2EDuration="15.186488007s" podCreationTimestamp="2025-10-18 18:25:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 18:25:17.168805417 +0000 UTC m=+22.761361928" watchObservedRunningTime="2025-10-18 18:25:17.186488007 +0000 UTC m=+22.779044526"
	Oct 18 18:25:19 no-preload-729957 kubelet[1998]: I1018 18:25:19.279961    1998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-q7mng" podStartSLOduration=19.279939703 podStartE2EDuration="19.279939703s" podCreationTimestamp="2025-10-18 18:25:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 18:25:17.189062793 +0000 UTC m=+22.781619320" watchObservedRunningTime="2025-10-18 18:25:19.279939703 +0000 UTC m=+24.872496214"
	Oct 18 18:25:19 no-preload-729957 kubelet[1998]: I1018 18:25:19.385133    1998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrnkf\" (UniqueName: \"kubernetes.io/projected/e89c4c23-75f1-45fd-a06e-77828509a4b3-kube-api-access-mrnkf\") pod \"busybox\" (UID: \"e89c4c23-75f1-45fd-a06e-77828509a4b3\") " pod="default/busybox"
	Oct 18 18:25:19 no-preload-729957 kubelet[1998]: W1018 18:25:19.621502    1998 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/26cea4068f8df271decd5fca2af44d16fcce605ab26c19246830b355e9629673/crio-7ff39ca547c8d86e1f040d60cae4cfee4422e4459ed6946cb5937c12380c760d WatchSource:0}: Error finding container 7ff39ca547c8d86e1f040d60cae4cfee4422e4459ed6946cb5937c12380c760d: Status 404 returned error can't find the container with id 7ff39ca547c8d86e1f040d60cae4cfee4422e4459ed6946cb5937c12380c760d
	Oct 18 18:25:22 no-preload-729957 kubelet[1998]: I1018 18:25:22.178777    1998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.255097747 podStartE2EDuration="3.178756722s" podCreationTimestamp="2025-10-18 18:25:19 +0000 UTC" firstStartedPulling="2025-10-18 18:25:19.624111991 +0000 UTC m=+25.216668501" lastFinishedPulling="2025-10-18 18:25:21.547770965 +0000 UTC m=+27.140327476" observedRunningTime="2025-10-18 18:25:22.177964613 +0000 UTC m=+27.770521140" watchObservedRunningTime="2025-10-18 18:25:22.178756722 +0000 UTC m=+27.771313233"
	
	
	==> storage-provisioner [7bc3c130ba074d8f92baa3df0b852710d531388fb1e9ead66551074c3fc207ee] <==
	I1018 18:25:16.824845       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 18:25:16.880727       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 18:25:16.880909       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1018 18:25:16.883893       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:25:16.893375       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 18:25:16.893820       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 18:25:16.894077       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-729957_71425694-f547-41ab-8f6f-7637c032e5d8!
	I1018 18:25:16.899875       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"707a2d03-df04-488e-b561-b69c9acdb2d6", APIVersion:"v1", ResourceVersion:"456", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-729957_71425694-f547-41ab-8f6f-7637c032e5d8 became leader
	W1018 18:25:16.903878       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:25:16.917400       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 18:25:16.995817       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-729957_71425694-f547-41ab-8f6f-7637c032e5d8!
	W1018 18:25:18.920749       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:25:18.927883       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:25:20.931003       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:25:20.935377       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:25:22.938565       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:25:22.945896       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:25:24.949500       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:25:24.954472       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:25:26.958882       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:25:26.964764       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:25:28.967959       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:25:28.977176       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
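The storage-provisioner log above repeatedly warns that "v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice": its leader-election lease is still held on an Endpoints object (see the LeaderElection event for kube-system/k8s.io-minikube-hostpath), so the warning presumably recurs on every lease renewal. A minimal client-go sketch, with a hypothetical kubeconfig path, shows the discovery.k8s.io/v1 EndpointSlice API the warning points to; it is illustrative only, not part of the provisioner's code.

	// endpointslices.go: list EndpointSlice objects in kube-system.
	// Assumption: the kubeconfig path below is hypothetical; adjust as needed.
	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // hypothetical path
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		// EndpointSlices live in the discovery.k8s.io/v1 API group referenced by the warning.
		slices, err := cs.DiscoveryV1().EndpointSlices("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			log.Fatal(err)
		}
		for _, s := range slices.Items {
			fmt.Println(s.Name)
		}
	}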
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-729957 -n no-preload-729957
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-729957 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (3.16s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (7.61s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-530891 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p newest-cni-530891 --alsologtostderr -v=1: exit status 80 (2.194809867s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-530891 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 18:25:47.241878  222078 out.go:360] Setting OutFile to fd 1 ...
	I1018 18:25:47.242055  222078 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 18:25:47.242084  222078 out.go:374] Setting ErrFile to fd 2...
	I1018 18:25:47.242105  222078 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 18:25:47.242370  222078 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-2509/.minikube/bin
	I1018 18:25:47.242638  222078 out.go:368] Setting JSON to false
	I1018 18:25:47.242687  222078 mustload.go:65] Loading cluster: newest-cni-530891
	I1018 18:25:47.243097  222078 config.go:182] Loaded profile config "newest-cni-530891": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 18:25:47.243623  222078 cli_runner.go:164] Run: docker container inspect newest-cni-530891 --format={{.State.Status}}
	I1018 18:25:47.261484  222078 host.go:66] Checking if "newest-cni-530891" exists ...
	I1018 18:25:47.261809  222078 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 18:25:47.324568  222078 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-18 18:25:47.313333493 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 18:25:47.325266  222078 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-530891 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1018 18:25:47.328871  222078 out.go:179] * Pausing node newest-cni-530891 ... 
	I1018 18:25:47.331857  222078 host.go:66] Checking if "newest-cni-530891" exists ...
	I1018 18:25:47.332259  222078 ssh_runner.go:195] Run: systemctl --version
	I1018 18:25:47.332307  222078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-530891
	I1018 18:25:47.349680  222078 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/newest-cni-530891/id_rsa Username:docker}
	I1018 18:25:47.455674  222078 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 18:25:47.467948  222078 pause.go:52] kubelet running: true
	I1018 18:25:47.468037  222078 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 18:25:47.696832  222078 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 18:25:47.696991  222078 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 18:25:47.793304  222078 cri.go:89] found id: "7745e67dd4bf1d9271d40d2bd20bea1d3022540aef104f3104f75cb3302bdabe"
	I1018 18:25:47.793324  222078 cri.go:89] found id: "f997a48b38311515b3f32540bec213d00420efe5b85e8623f1d6b06b790cb4d5"
	I1018 18:25:47.793328  222078 cri.go:89] found id: "06cd45c57fb1f9231e7e055195a58f25283c5dfad82d2b594c49d2f914affbb1"
	I1018 18:25:47.793332  222078 cri.go:89] found id: "db7e052bfb458a74f9e888e8ffa588ed16db1f7552c47499d1b30765c74fcce9"
	I1018 18:25:47.793335  222078 cri.go:89] found id: "b8704a5aac61ca586b4c30874de5bf81d5cdaae0f243e3ef02e446567cf0f0de"
	I1018 18:25:47.793339  222078 cri.go:89] found id: "e3bfcde1f4a1727da6087f51f60474aa425e63692284f02e03415c5e14f663ce"
	I1018 18:25:47.793342  222078 cri.go:89] found id: ""
	I1018 18:25:47.793394  222078 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 18:25:47.809097  222078 retry.go:31] will retry after 286.619505ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T18:25:47Z" level=error msg="open /run/runc: no such file or directory"
	I1018 18:25:48.096655  222078 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 18:25:48.110720  222078 pause.go:52] kubelet running: false
	I1018 18:25:48.110789  222078 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 18:25:48.313426  222078 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 18:25:48.313505  222078 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 18:25:48.462199  222078 cri.go:89] found id: "7745e67dd4bf1d9271d40d2bd20bea1d3022540aef104f3104f75cb3302bdabe"
	I1018 18:25:48.462218  222078 cri.go:89] found id: "f997a48b38311515b3f32540bec213d00420efe5b85e8623f1d6b06b790cb4d5"
	I1018 18:25:48.462223  222078 cri.go:89] found id: "06cd45c57fb1f9231e7e055195a58f25283c5dfad82d2b594c49d2f914affbb1"
	I1018 18:25:48.462227  222078 cri.go:89] found id: "db7e052bfb458a74f9e888e8ffa588ed16db1f7552c47499d1b30765c74fcce9"
	I1018 18:25:48.462244  222078 cri.go:89] found id: "b8704a5aac61ca586b4c30874de5bf81d5cdaae0f243e3ef02e446567cf0f0de"
	I1018 18:25:48.462248  222078 cri.go:89] found id: "e3bfcde1f4a1727da6087f51f60474aa425e63692284f02e03415c5e14f663ce"
	I1018 18:25:48.462251  222078 cri.go:89] found id: ""
	I1018 18:25:48.462306  222078 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 18:25:48.481106  222078 retry.go:31] will retry after 545.830774ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T18:25:48Z" level=error msg="open /run/runc: no such file or directory"
	I1018 18:25:49.027899  222078 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 18:25:49.043753  222078 pause.go:52] kubelet running: false
	I1018 18:25:49.043827  222078 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 18:25:49.236475  222078 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 18:25:49.236557  222078 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 18:25:49.335420  222078 cri.go:89] found id: "7745e67dd4bf1d9271d40d2bd20bea1d3022540aef104f3104f75cb3302bdabe"
	I1018 18:25:49.335444  222078 cri.go:89] found id: "f997a48b38311515b3f32540bec213d00420efe5b85e8623f1d6b06b790cb4d5"
	I1018 18:25:49.335449  222078 cri.go:89] found id: "06cd45c57fb1f9231e7e055195a58f25283c5dfad82d2b594c49d2f914affbb1"
	I1018 18:25:49.335454  222078 cri.go:89] found id: "db7e052bfb458a74f9e888e8ffa588ed16db1f7552c47499d1b30765c74fcce9"
	I1018 18:25:49.335457  222078 cri.go:89] found id: "b8704a5aac61ca586b4c30874de5bf81d5cdaae0f243e3ef02e446567cf0f0de"
	I1018 18:25:49.335465  222078 cri.go:89] found id: "e3bfcde1f4a1727da6087f51f60474aa425e63692284f02e03415c5e14f663ce"
	I1018 18:25:49.335469  222078 cri.go:89] found id: ""
	I1018 18:25:49.335516  222078 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 18:25:49.361694  222078 out.go:203] 
	W1018 18:25:49.364473  222078 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T18:25:49Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T18:25:49Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 18:25:49.364495  222078 out.go:285] * 
	* 
	W1018 18:25:49.370649  222078 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 18:25:49.373670  222078 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p newest-cni-530891 --alsologtostderr -v=1 failed: exit status 80
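The GUEST_PAUSE exit above traces to the retried `sudo runc list -f json` calls in the stderr log, each of which fails with "open /run/runc: no such file or directory" after the kubelet has been stopped, so pause never gets a container list to act on. A minimal diagnostic sketch in Go, assuming it is run directly on the node (for example via `minikube ssh`), mirrors that same check; the /run/runc path is the one reported in the log, not a guaranteed location for every runtime configuration.

	// diag_runc.go: reproduce the container-listing step that pause retries above.
	// Assumption: runs on the minikube node with sudo and runc available.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// The state directory named in the error; if the CRI runtime keeps its
		// state elsewhere, this stat fails the same way the log does.
		if _, err := os.Stat("/run/runc"); err != nil {
			fmt.Println("runc state dir missing:", err)
		}

		// Same invocation the pause code retries in the stderr log above.
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		fmt.Printf("runc list output: %s\n", out)
		if err != nil {
			fmt.Println("runc list failed:", err)
		}
	}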
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-530891
helpers_test.go:243: (dbg) docker inspect newest-cni-530891:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "592c46465c1aa48efe97f2b3db6c46c918fe8e6fb44a63deec22e7bb1784c31e",
	        "Created": "2025-10-18T18:24:51.961915069Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 219520,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T18:25:31.106798134Z",
	            "FinishedAt": "2025-10-18T18:25:29.990210321Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/592c46465c1aa48efe97f2b3db6c46c918fe8e6fb44a63deec22e7bb1784c31e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/592c46465c1aa48efe97f2b3db6c46c918fe8e6fb44a63deec22e7bb1784c31e/hostname",
	        "HostsPath": "/var/lib/docker/containers/592c46465c1aa48efe97f2b3db6c46c918fe8e6fb44a63deec22e7bb1784c31e/hosts",
	        "LogPath": "/var/lib/docker/containers/592c46465c1aa48efe97f2b3db6c46c918fe8e6fb44a63deec22e7bb1784c31e/592c46465c1aa48efe97f2b3db6c46c918fe8e6fb44a63deec22e7bb1784c31e-json.log",
	        "Name": "/newest-cni-530891",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-530891:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-530891",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "592c46465c1aa48efe97f2b3db6c46c918fe8e6fb44a63deec22e7bb1784c31e",
	                "LowerDir": "/var/lib/docker/overlay2/6123219178d0089d945290d8d54993696ba0db05146a36d826c912c6a71dea18-init/diff:/var/lib/docker/overlay2/584ab177b02ad2db5330471b7171ad39934c457d8615b9ee4939a04b59f78474/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6123219178d0089d945290d8d54993696ba0db05146a36d826c912c6a71dea18/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6123219178d0089d945290d8d54993696ba0db05146a36d826c912c6a71dea18/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6123219178d0089d945290d8d54993696ba0db05146a36d826c912c6a71dea18/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-530891",
	                "Source": "/var/lib/docker/volumes/newest-cni-530891/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-530891",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-530891",
	                "name.minikube.sigs.k8s.io": "newest-cni-530891",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "15db1831ec8a0c62ce080c0e2ce16a92530b87bb8835e505562d09c8ef6a6dae",
	            "SandboxKey": "/var/run/docker/netns/15db1831ec8a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-530891": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d6:25:b2:fe:a2:38",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "430d25fc02daf5d96f95a7e706911a3c6ed05a1ed551d0fc6d07a2b7559606cd",
	                    "EndpointID": "0d42b0b0458d252cbef047f36c49997e25d582b0093042a23b94eca0223fb645",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-530891",
	                        "592c46465c1a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-530891 -n newest-cni-530891
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-530891 -n newest-cni-530891: exit status 2 (406.730658ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-530891 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-530891 logs -n 25: (1.335368899s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ addons  │ enable metrics-server -p embed-certs-213943 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-213943           │ jenkins │ v1.37.0 │ 18 Oct 25 18:23 UTC │                     │
	│ stop    │ -p embed-certs-213943 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-213943           │ jenkins │ v1.37.0 │ 18 Oct 25 18:23 UTC │ 18 Oct 25 18:23 UTC │
	│ addons  │ enable dashboard -p embed-certs-213943 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-213943           │ jenkins │ v1.37.0 │ 18 Oct 25 18:23 UTC │ 18 Oct 25 18:23 UTC │
	│ start   │ -p embed-certs-213943 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-213943           │ jenkins │ v1.37.0 │ 18 Oct 25 18:23 UTC │ 18 Oct 25 18:24 UTC │
	│ image   │ default-k8s-diff-port-192562 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-192562 │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │ 18 Oct 25 18:24 UTC │
	│ pause   │ -p default-k8s-diff-port-192562 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-192562 │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-192562                                                                                                                                                                                                               │ default-k8s-diff-port-192562 │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │ 18 Oct 25 18:24 UTC │
	│ delete  │ -p default-k8s-diff-port-192562                                                                                                                                                                                                               │ default-k8s-diff-port-192562 │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │ 18 Oct 25 18:24 UTC │
	│ delete  │ -p disable-driver-mounts-747178                                                                                                                                                                                                               │ disable-driver-mounts-747178 │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │ 18 Oct 25 18:24 UTC │
	│ start   │ -p no-preload-729957 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-729957            │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │ 18 Oct 25 18:25 UTC │
	│ image   │ embed-certs-213943 image list --format=json                                                                                                                                                                                                   │ embed-certs-213943           │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │ 18 Oct 25 18:24 UTC │
	│ pause   │ -p embed-certs-213943 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-213943           │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │                     │
	│ delete  │ -p embed-certs-213943                                                                                                                                                                                                                         │ embed-certs-213943           │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │ 18 Oct 25 18:24 UTC │
	│ delete  │ -p embed-certs-213943                                                                                                                                                                                                                         │ embed-certs-213943           │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │ 18 Oct 25 18:24 UTC │
	│ start   │ -p newest-cni-530891 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-530891            │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │ 18 Oct 25 18:25 UTC │
	│ addons  │ enable metrics-server -p newest-cni-530891 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-530891            │ jenkins │ v1.37.0 │ 18 Oct 25 18:25 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-729957 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-729957            │ jenkins │ v1.37.0 │ 18 Oct 25 18:25 UTC │                     │
	│ stop    │ -p newest-cni-530891 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-530891            │ jenkins │ v1.37.0 │ 18 Oct 25 18:25 UTC │ 18 Oct 25 18:25 UTC │
	│ addons  │ enable dashboard -p newest-cni-530891 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-530891            │ jenkins │ v1.37.0 │ 18 Oct 25 18:25 UTC │ 18 Oct 25 18:25 UTC │
	│ start   │ -p newest-cni-530891 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-530891            │ jenkins │ v1.37.0 │ 18 Oct 25 18:25 UTC │ 18 Oct 25 18:25 UTC │
	│ stop    │ -p no-preload-729957 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-729957            │ jenkins │ v1.37.0 │ 18 Oct 25 18:25 UTC │ 18 Oct 25 18:25 UTC │
	│ addons  │ enable dashboard -p no-preload-729957 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-729957            │ jenkins │ v1.37.0 │ 18 Oct 25 18:25 UTC │ 18 Oct 25 18:25 UTC │
	│ start   │ -p no-preload-729957 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-729957            │ jenkins │ v1.37.0 │ 18 Oct 25 18:25 UTC │                     │
	│ image   │ newest-cni-530891 image list --format=json                                                                                                                                                                                                    │ newest-cni-530891            │ jenkins │ v1.37.0 │ 18 Oct 25 18:25 UTC │ 18 Oct 25 18:25 UTC │
	│ pause   │ -p newest-cni-530891 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-530891            │ jenkins │ v1.37.0 │ 18 Oct 25 18:25 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 18:25:44
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 18:25:44.243603  221240 out.go:360] Setting OutFile to fd 1 ...
	I1018 18:25:44.244215  221240 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 18:25:44.244248  221240 out.go:374] Setting ErrFile to fd 2...
	I1018 18:25:44.244268  221240 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 18:25:44.244564  221240 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-2509/.minikube/bin
	I1018 18:25:44.245023  221240 out.go:368] Setting JSON to false
	I1018 18:25:44.245958  221240 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7694,"bootTime":1760804251,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1018 18:25:44.246049  221240 start.go:141] virtualization:  
	I1018 18:25:44.249674  221240 out.go:179] * [no-preload-729957] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 18:25:44.252771  221240 out.go:179]   - MINIKUBE_LOCATION=21409
	I1018 18:25:44.252842  221240 notify.go:220] Checking for updates...
	I1018 18:25:44.258668  221240 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 18:25:44.261705  221240 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-2509/kubeconfig
	I1018 18:25:44.264694  221240 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-2509/.minikube
	I1018 18:25:44.267873  221240 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 18:25:44.270758  221240 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 18:25:44.274150  221240 config.go:182] Loaded profile config "no-preload-729957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 18:25:44.274690  221240 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 18:25:44.328645  221240 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 18:25:44.328761  221240 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 18:25:44.441712  221240 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-18 18:25:44.428626954 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 18:25:44.441812  221240 docker.go:318] overlay module found
	I1018 18:25:44.444903  221240 out.go:179] * Using the docker driver based on existing profile
	I1018 18:25:44.447726  221240 start.go:305] selected driver: docker
	I1018 18:25:44.447752  221240 start.go:925] validating driver "docker" against &{Name:no-preload-729957 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-729957 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 18:25:44.447862  221240 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 18:25:44.448563  221240 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 18:25:44.564739  221240 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-18 18:25:44.554399962 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 18:25:44.565158  221240 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 18:25:44.565190  221240 cni.go:84] Creating CNI manager for ""
	I1018 18:25:44.565243  221240 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 18:25:44.565276  221240 start.go:349] cluster config:
	{Name:no-preload-729957 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-729957 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 18:25:44.568532  221240 out.go:179] * Starting "no-preload-729957" primary control-plane node in "no-preload-729957" cluster
	I1018 18:25:44.571441  221240 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 18:25:44.574404  221240 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 18:25:44.577298  221240 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 18:25:44.577439  221240 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/no-preload-729957/config.json ...
	I1018 18:25:44.577784  221240 cache.go:107] acquiring lock: {Name:mkfe0c95c3696c6ee6d6bee7d1ad713b9bd021b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 18:25:44.577860  221240 cache.go:115] /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1018 18:25:44.577867  221240 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 97.683µs
	I1018 18:25:44.577876  221240 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1018 18:25:44.577887  221240 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 18:25:44.578124  221240 cache.go:107] acquiring lock: {Name:mk2fda38822643b1c863eb02b4b58b1c8beea2d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 18:25:44.578187  221240 cache.go:115] /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1018 18:25:44.578194  221240 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 76.604µs
	I1018 18:25:44.578201  221240 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1018 18:25:44.578212  221240 cache.go:107] acquiring lock: {Name:mkd26b3798aaf66fcad945e0c1a60f0824366e40 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 18:25:44.578255  221240 cache.go:115] /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1018 18:25:44.578261  221240 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 51.324µs
	I1018 18:25:44.578268  221240 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1018 18:25:44.578277  221240 cache.go:107] acquiring lock: {Name:mkd3282648be7d83ac0e67296042440acb53052b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 18:25:44.578304  221240 cache.go:115] /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1018 18:25:44.578309  221240 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 33.231µs
	I1018 18:25:44.578315  221240 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1018 18:25:44.578323  221240 cache.go:107] acquiring lock: {Name:mk6a37c53550d30a6c5a6027e63e35937896f954 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 18:25:44.578349  221240 cache.go:115] /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1018 18:25:44.578354  221240 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 31.5µs
	I1018 18:25:44.578360  221240 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1018 18:25:44.578370  221240 cache.go:107] acquiring lock: {Name:mka02bf3e7fa031efb5dd0162aedd881c5c29af2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 18:25:44.578394  221240 cache.go:115] /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1018 18:25:44.578399  221240 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 30.359µs
	I1018 18:25:44.578405  221240 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1018 18:25:44.578413  221240 cache.go:107] acquiring lock: {Name:mk3a776414901f1896d41bf7105926b8db2f104a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 18:25:44.578437  221240 cache.go:115] /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1018 18:25:44.578443  221240 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 30.417µs
	I1018 18:25:44.578448  221240 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1018 18:25:44.578458  221240 cache.go:107] acquiring lock: {Name:mke59697c6719748ff18c4e99b2595c9da08adaf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 18:25:44.578485  221240 cache.go:115] /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1018 18:25:44.578489  221240 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 32.534µs
	I1018 18:25:44.578495  221240 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1018 18:25:44.578501  221240 cache.go:87] Successfully saved all images to host disk.
	I1018 18:25:44.606298  221240 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 18:25:44.606317  221240 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 18:25:44.606330  221240 cache.go:232] Successfully downloaded all kic artifacts
	I1018 18:25:44.606353  221240 start.go:360] acquireMachinesLock for no-preload-729957: {Name:mke750361707948cde27a747cd8852fabeab5692 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 18:25:44.606402  221240 start.go:364] duration metric: took 35.357µs to acquireMachinesLock for "no-preload-729957"
	I1018 18:25:44.606420  221240 start.go:96] Skipping create...Using existing machine configuration
	I1018 18:25:44.606425  221240 fix.go:54] fixHost starting: 
	I1018 18:25:44.606684  221240 cli_runner.go:164] Run: docker container inspect no-preload-729957 --format={{.State.Status}}
	I1018 18:25:44.633508  221240 fix.go:112] recreateIfNeeded on no-preload-729957: state=Stopped err=<nil>
	W1018 18:25:44.633537  221240 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 18:25:46.366633  219338 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.430814406s)
	I1018 18:25:46.366718  219338 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.406395859s)
	I1018 18:25:46.367034  219338 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (7.389734566s)
	I1018 18:25:46.367059  219338 api_server.go:72] duration metric: took 7.788351789s to wait for apiserver process to appear ...
	I1018 18:25:46.367069  219338 api_server.go:88] waiting for apiserver healthz status ...
	I1018 18:25:46.367082  219338 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 18:25:46.367361  219338 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.958665898s)
	I1018 18:25:46.370294  219338 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-530891 addons enable metrics-server
	
	I1018 18:25:46.386317  219338 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1018 18:25:46.387364  219338 api_server.go:141] control plane version: v1.34.1
	I1018 18:25:46.387390  219338 api_server.go:131] duration metric: took 20.3146ms to wait for apiserver health ...
	I1018 18:25:46.387400  219338 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 18:25:46.394955  219338 system_pods.go:59] 8 kube-system pods found
	I1018 18:25:46.394993  219338 system_pods.go:61] "coredns-66bc5c9577-brzb4" [762df58f-b70f-479e-b130-07c24a8f3f51] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1018 18:25:46.395003  219338 system_pods.go:61] "etcd-newest-cni-530891" [1cf783e9-928f-47f5-be9d-4df2479e9b31] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 18:25:46.395009  219338 system_pods.go:61] "kindnet-497z4" [99e6305c-fb9e-4f10-9746-3dfdd03c570a] Running
	I1018 18:25:46.395026  219338 system_pods.go:61] "kube-apiserver-newest-cni-530891" [b43d0e4b-98c3-4c5e-96dc-4ab8c7913e63] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 18:25:46.395033  219338 system_pods.go:61] "kube-controller-manager-newest-cni-530891" [0d687e21-ef2f-4a67-94ea-d40750239b57] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 18:25:46.395038  219338 system_pods.go:61] "kube-proxy-k8ljb" [2f4233c2-bc5d-452a-84e3-875564801a54] Running
	I1018 18:25:46.395045  219338 system_pods.go:61] "kube-scheduler-newest-cni-530891" [a81c1ce2-edc2-4f88-aebd-d06916133c38] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 18:25:46.395052  219338 system_pods.go:61] "storage-provisioner" [b2348e9f-6e43-4f09-a0c0-01ab697d968a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1018 18:25:46.395058  219338 system_pods.go:74] duration metric: took 7.644464ms to wait for pod list to return data ...
	I1018 18:25:46.395072  219338 default_sa.go:34] waiting for default service account to be created ...
	I1018 18:25:46.398654  219338 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1018 18:25:46.400005  219338 default_sa.go:45] found service account: "default"
	I1018 18:25:46.400025  219338 default_sa.go:55] duration metric: took 4.947313ms for default service account to be created ...
	I1018 18:25:46.400040  219338 kubeadm.go:586] duration metric: took 7.821329691s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1018 18:25:46.400057  219338 node_conditions.go:102] verifying NodePressure condition ...
	I1018 18:25:46.401475  219338 addons.go:514] duration metric: took 7.822395461s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1018 18:25:46.414466  219338 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 18:25:46.414508  219338 node_conditions.go:123] node cpu capacity is 2
	I1018 18:25:46.414521  219338 node_conditions.go:105] duration metric: took 14.458861ms to run NodePressure ...
	I1018 18:25:46.414533  219338 start.go:241] waiting for startup goroutines ...
	I1018 18:25:46.414540  219338 start.go:246] waiting for cluster config update ...
	I1018 18:25:46.414551  219338 start.go:255] writing updated cluster config ...
	I1018 18:25:46.414882  219338 ssh_runner.go:195] Run: rm -f paused
	I1018 18:25:46.501941  219338 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 18:25:46.505066  219338 out.go:179] * Done! kubectl is now configured to use "newest-cni-530891" cluster and "default" namespace by default
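
Note on the wait above: the 219338 run confirms the control plane by polling https://192.168.85.2:8443/healthz until it returns 200 (the api_server.go "Checking apiserver healthz ... returned 200: ok" lines). The following is only a minimal Go sketch of such a poll; the URL comes from the log, while the helper name, the one-minute budget, and the skipped certificate verification are illustrative assumptions rather than minikube's actual implementation.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the given healthz URL until it answers 200 or the
// timeout expires. TLS verification is disabled only because this sketch
// does not load the cluster CA.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // matches the "returned 200: ok" line in the log
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("healthz at %s not ready within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.85.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
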
	I1018 18:25:44.636630  221240 out.go:252] * Restarting existing docker container for "no-preload-729957" ...
	I1018 18:25:44.636735  221240 cli_runner.go:164] Run: docker start no-preload-729957
	I1018 18:25:45.037168  221240 cli_runner.go:164] Run: docker container inspect no-preload-729957 --format={{.State.Status}}
	I1018 18:25:45.068006  221240 kic.go:430] container "no-preload-729957" state is running.
	I1018 18:25:45.068429  221240 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-729957
	I1018 18:25:45.101308  221240 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/no-preload-729957/config.json ...
	I1018 18:25:45.101578  221240 machine.go:93] provisionDockerMachine start ...
	I1018 18:25:45.101645  221240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-729957
	I1018 18:25:45.150841  221240 main.go:141] libmachine: Using SSH client type: native
	I1018 18:25:45.151175  221240 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1018 18:25:45.151186  221240 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 18:25:45.155174  221240 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51454->127.0.0.1:33088: read: connection reset by peer
	I1018 18:25:48.349227  221240 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-729957
	
	I1018 18:25:48.349295  221240 ubuntu.go:182] provisioning hostname "no-preload-729957"
	I1018 18:25:48.349374  221240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-729957
	I1018 18:25:48.371544  221240 main.go:141] libmachine: Using SSH client type: native
	I1018 18:25:48.371850  221240 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1018 18:25:48.371866  221240 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-729957 && echo "no-preload-729957" | sudo tee /etc/hostname
	I1018 18:25:48.538716  221240 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-729957
	
	I1018 18:25:48.538816  221240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-729957
	I1018 18:25:48.558087  221240 main.go:141] libmachine: Using SSH client type: native
	I1018 18:25:48.558459  221240 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1018 18:25:48.558484  221240 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-729957' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-729957/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-729957' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 18:25:48.705455  221240 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 18:25:48.705529  221240 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-2509/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-2509/.minikube}
	I1018 18:25:48.705563  221240 ubuntu.go:190] setting up certificates
	I1018 18:25:48.705598  221240 provision.go:84] configureAuth start
	I1018 18:25:48.705694  221240 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-729957
	I1018 18:25:48.723622  221240 provision.go:143] copyHostCerts
	I1018 18:25:48.723750  221240 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem, removing ...
	I1018 18:25:48.723810  221240 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem
	I1018 18:25:48.723931  221240 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem (1078 bytes)
	I1018 18:25:48.724045  221240 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem, removing ...
	I1018 18:25:48.724052  221240 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem
	I1018 18:25:48.724079  221240 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem (1123 bytes)
	I1018 18:25:48.724138  221240 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem, removing ...
	I1018 18:25:48.724143  221240 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem
	I1018 18:25:48.724165  221240 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem (1675 bytes)
	I1018 18:25:48.724225  221240 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem org=jenkins.no-preload-729957 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-729957]
	I1018 18:25:49.135641  221240 provision.go:177] copyRemoteCerts
	I1018 18:25:49.135759  221240 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 18:25:49.135823  221240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-729957
	I1018 18:25:49.162993  221240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/no-preload-729957/id_rsa Username:docker}
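
Throughout the 221240 trace, the machine is driven by shelling out to "docker container inspect" with Go templates: one call for the container state, one for the container IP, and one for the host port published for 22/tcp (the cli_runner.go lines above). The sketch below replays those three calls with os/exec; the container name and format strings are taken from the log, the helper itself is just an illustrative stand-in for cli_runner.go.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// inspect runs `docker container inspect -f <format> <name>` and returns the
// trimmed output, mirroring the cli_runner.go invocations in the log.
func inspect(name, format string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", "-f", format, name).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	const name = "no-preload-729957" // container name from the log

	state, err := inspect(name, "{{.State.Status}}")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	ip, _ := inspect(name, "{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}")
	sshPort, _ := inspect(name, `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`)

	fmt.Printf("state=%s ip=%s ssh-port=%s\n", state, ip, sshPort)
}
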
	
	
	==> CRI-O <==
	Oct 18 18:25:45 newest-cni-530891 crio[612]: time="2025-10-18T18:25:45.15265575Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 18:25:45 newest-cni-530891 crio[612]: time="2025-10-18T18:25:45.188687911Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=a0d3fe6d-bb31-4f7f-b47f-3543b15bafd3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 18:25:45 newest-cni-530891 crio[612]: time="2025-10-18T18:25:45.189532557Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-k8ljb/POD" id=6ec66d9e-4cef-4a32-b97f-c8ca8b4a00ac name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 18:25:45 newest-cni-530891 crio[612]: time="2025-10-18T18:25:45.189839441Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 18:25:45 newest-cni-530891 crio[612]: time="2025-10-18T18:25:45.256337758Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=6ec66d9e-4cef-4a32-b97f-c8ca8b4a00ac name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 18:25:45 newest-cni-530891 crio[612]: time="2025-10-18T18:25:45.273060705Z" level=info msg="Ran pod sandbox e650f9b6f354349ddf1c0de73f3dffa9634229520070d7ad90209cc8a1e4e121 with infra container: kube-system/kindnet-497z4/POD" id=a0d3fe6d-bb31-4f7f-b47f-3543b15bafd3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 18:25:45 newest-cni-530891 crio[612]: time="2025-10-18T18:25:45.289398606Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=300bbb88-6bc8-4cea-8e36-6c22afee0d7e name=/runtime.v1.ImageService/ImageStatus
	Oct 18 18:25:45 newest-cni-530891 crio[612]: time="2025-10-18T18:25:45.294716877Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=d09cf8ed-f957-44c5-a937-aae03e354f7c name=/runtime.v1.ImageService/ImageStatus
	Oct 18 18:25:45 newest-cni-530891 crio[612]: time="2025-10-18T18:25:45.29661163Z" level=info msg="Creating container: kube-system/kindnet-497z4/kindnet-cni" id=9b529e55-a47a-44df-ac48-b3d9471a41d4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 18:25:45 newest-cni-530891 crio[612]: time="2025-10-18T18:25:45.297315672Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 18:25:45 newest-cni-530891 crio[612]: time="2025-10-18T18:25:45.373166773Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 18:25:45 newest-cni-530891 crio[612]: time="2025-10-18T18:25:45.38408368Z" level=info msg="Ran pod sandbox 7702d7866c44b245fb5125de0b275f00d916abb79da175e3c60b294e481e0588 with infra container: kube-system/kube-proxy-k8ljb/POD" id=6ec66d9e-4cef-4a32-b97f-c8ca8b4a00ac name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 18:25:45 newest-cni-530891 crio[612]: time="2025-10-18T18:25:45.384480149Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 18:25:45 newest-cni-530891 crio[612]: time="2025-10-18T18:25:45.399464358Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=4e488749-7065-44cf-92fa-45cd42d533f7 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 18:25:45 newest-cni-530891 crio[612]: time="2025-10-18T18:25:45.403736545Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=21fd1537-4b00-4523-9802-ec50676b2fa3 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 18:25:45 newest-cni-530891 crio[612]: time="2025-10-18T18:25:45.409983411Z" level=info msg="Creating container: kube-system/kube-proxy-k8ljb/kube-proxy" id=cf5ea5f6-0891-46d9-9f90-222b04462ac6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 18:25:45 newest-cni-530891 crio[612]: time="2025-10-18T18:25:45.411780324Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 18:25:45 newest-cni-530891 crio[612]: time="2025-10-18T18:25:45.451351787Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 18:25:45 newest-cni-530891 crio[612]: time="2025-10-18T18:25:45.473321046Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 18:25:45 newest-cni-530891 crio[612]: time="2025-10-18T18:25:45.478475074Z" level=info msg="Created container f997a48b38311515b3f32540bec213d00420efe5b85e8623f1d6b06b790cb4d5: kube-system/kindnet-497z4/kindnet-cni" id=9b529e55-a47a-44df-ac48-b3d9471a41d4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 18:25:45 newest-cni-530891 crio[612]: time="2025-10-18T18:25:45.487793832Z" level=info msg="Starting container: f997a48b38311515b3f32540bec213d00420efe5b85e8623f1d6b06b790cb4d5" id=ec147c13-d68e-47cc-89e5-90bc1a26c99c name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 18:25:45 newest-cni-530891 crio[612]: time="2025-10-18T18:25:45.501293829Z" level=info msg="Started container" PID=1059 containerID=f997a48b38311515b3f32540bec213d00420efe5b85e8623f1d6b06b790cb4d5 description=kube-system/kindnet-497z4/kindnet-cni id=ec147c13-d68e-47cc-89e5-90bc1a26c99c name=/runtime.v1.RuntimeService/StartContainer sandboxID=e650f9b6f354349ddf1c0de73f3dffa9634229520070d7ad90209cc8a1e4e121
	Oct 18 18:25:45 newest-cni-530891 crio[612]: time="2025-10-18T18:25:45.597321173Z" level=info msg="Created container 7745e67dd4bf1d9271d40d2bd20bea1d3022540aef104f3104f75cb3302bdabe: kube-system/kube-proxy-k8ljb/kube-proxy" id=cf5ea5f6-0891-46d9-9f90-222b04462ac6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 18:25:45 newest-cni-530891 crio[612]: time="2025-10-18T18:25:45.616634817Z" level=info msg="Starting container: 7745e67dd4bf1d9271d40d2bd20bea1d3022540aef104f3104f75cb3302bdabe" id=882ee054-6fb1-4b32-8cd9-734f965eb101 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 18:25:45 newest-cni-530891 crio[612]: time="2025-10-18T18:25:45.631763028Z" level=info msg="Started container" PID=1068 containerID=7745e67dd4bf1d9271d40d2bd20bea1d3022540aef104f3104f75cb3302bdabe description=kube-system/kube-proxy-k8ljb/kube-proxy id=882ee054-6fb1-4b32-8cd9-734f965eb101 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7702d7866c44b245fb5125de0b275f00d916abb79da175e3c60b294e481e0588
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	7745e67dd4bf1       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   5 seconds ago       Running             kube-proxy                1                   7702d7866c44b       kube-proxy-k8ljb                            kube-system
	f997a48b38311       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   5 seconds ago       Running             kindnet-cni               1                   e650f9b6f3543       kindnet-497z4                               kube-system
	06cd45c57fb1f       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   12 seconds ago      Running             kube-apiserver            1                   1ace07e34915a       kube-apiserver-newest-cni-530891            kube-system
	db7e052bfb458       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   12 seconds ago      Running             kube-scheduler            1                   6722caf17c9df       kube-scheduler-newest-cni-530891            kube-system
	b8704a5aac61c       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   12 seconds ago      Running             etcd                      1                   f33a61f5fd445       etcd-newest-cni-530891                      kube-system
	e3bfcde1f4a17       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   12 seconds ago      Running             kube-controller-manager   1                   3c3452a0378a2       kube-controller-manager-newest-cni-530891   kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-530891
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-530891
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=newest-cni-530891
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T18_25_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 18:25:15 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-530891
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 18:25:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 18:25:44 +0000   Sat, 18 Oct 2025 18:25:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 18:25:44 +0000   Sat, 18 Oct 2025 18:25:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 18:25:44 +0000   Sat, 18 Oct 2025 18:25:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 18 Oct 2025 18:25:44 +0000   Sat, 18 Oct 2025 18:25:11 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-530891
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                4a5cb23d-033c-4f7d-ae76-6a54d50540e5
	  Boot ID:                    8a1d6305-2994-4aa4-a4cc-6b62966b9918
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-530891                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         32s
	  kube-system                 kindnet-497z4                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-newest-cni-530891             250m (12%)    0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-newest-cni-530891    200m (10%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-k8ljb                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-newest-cni-530891             100m (5%)     0 (0%)      0 (0%)           0 (0%)         32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 23s                kube-proxy       
	  Normal   Starting                 4s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  39s (x8 over 39s)  kubelet          Node newest-cni-530891 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    39s (x8 over 39s)  kubelet          Node newest-cni-530891 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     39s (x8 over 39s)  kubelet          Node newest-cni-530891 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    32s                kubelet          Node newest-cni-530891 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 32s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  32s                kubelet          Node newest-cni-530891 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     32s                kubelet          Node newest-cni-530891 status is now: NodeHasSufficientPID
	  Normal   Starting                 32s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           27s                node-controller  Node newest-cni-530891 event: Registered Node newest-cni-530891 in Controller
	  Normal   Starting                 13s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 13s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12s (x8 over 13s)  kubelet          Node newest-cni-530891 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12s (x8 over 13s)  kubelet          Node newest-cni-530891 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12s (x8 over 13s)  kubelet          Node newest-cni-530891 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           3s                 node-controller  Node newest-cni-530891 event: Registered Node newest-cni-530891 in Controller
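
The describe output shows the node still carrying the node.kubernetes.io/not-ready:NoSchedule taint because no CNI config is present yet; the Pending coredns and storage-provisioner pods in the 219338 trace are blocked on exactly that taint. Below is a small client-go sketch that reads the same Ready condition and taints; the node name is the one in the report, the kubeconfig path is the usual default, and the rest is an illustrative assumption rather than anything the tests run.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the default kubeconfig (~/.kube/config).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	node, err := clientset.CoreV1().Nodes().Get(context.TODO(), "newest-cni-530891", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Print the Ready condition, which above is False with KubeletNotReady
	// until the CNI config appears under /etc/cni/net.d/.
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			fmt.Printf("Ready=%s reason=%s message=%q\n", c.Status, c.Reason, c.Message)
		}
	}
	// Print taints, e.g. node.kubernetes.io/not-ready:NoSchedule.
	for _, t := range node.Spec.Taints {
		fmt.Printf("taint %s=%s:%s\n", t.Key, t.Value, t.Effect)
	}
}
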
	
	
	==> dmesg <==
	[  +6.162774] overlayfs: idmapped layers are currently not supported
	[Oct18 18:05] overlayfs: idmapped layers are currently not supported
	[ +25.128760] overlayfs: idmapped layers are currently not supported
	[Oct18 18:06] overlayfs: idmapped layers are currently not supported
	[Oct18 18:07] overlayfs: idmapped layers are currently not supported
	[Oct18 18:08] overlayfs: idmapped layers are currently not supported
	[Oct18 18:09] overlayfs: idmapped layers are currently not supported
	[Oct18 18:11] overlayfs: idmapped layers are currently not supported
	[Oct18 18:13] overlayfs: idmapped layers are currently not supported
	[ +30.969240] overlayfs: idmapped layers are currently not supported
	[Oct18 18:15] overlayfs: idmapped layers are currently not supported
	[Oct18 18:16] overlayfs: idmapped layers are currently not supported
	[Oct18 18:17] overlayfs: idmapped layers are currently not supported
	[ +23.167826] overlayfs: idmapped layers are currently not supported
	[Oct18 18:18] overlayfs: idmapped layers are currently not supported
	[ +38.509809] overlayfs: idmapped layers are currently not supported
	[Oct18 18:19] overlayfs: idmapped layers are currently not supported
	[Oct18 18:21] overlayfs: idmapped layers are currently not supported
	[Oct18 18:22] overlayfs: idmapped layers are currently not supported
	[Oct18 18:23] overlayfs: idmapped layers are currently not supported
	[ +30.822562] overlayfs: idmapped layers are currently not supported
	[Oct18 18:24] bpfilter: read fail -512
	[ +10.607871] overlayfs: idmapped layers are currently not supported
	[Oct18 18:25] overlayfs: idmapped layers are currently not supported
	[ +26.762544] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [b8704a5aac61ca586b4c30874de5bf81d5cdaae0f243e3ef02e446567cf0f0de] <==
	{"level":"warn","ts":"2025-10-18T18:25:41.978593Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:41.989996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:42.007526Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:42.035776Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:42.054849Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:42.075784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:42.114576Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:42.143377Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:42.155357Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:42.181006Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:42.201858Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:42.214346Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:42.235304Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:42.252622Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:42.276204Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:42.287903Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:42.306320Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:42.330632Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:42.348901Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:42.364352Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:42.382880Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:42.416499Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:42.474540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:42.491950Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:42.548203Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39150","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 18:25:50 up  2:08,  0 user,  load average: 3.94, 3.35, 2.91
	Linux newest-cni-530891 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f997a48b38311515b3f32540bec213d00420efe5b85e8623f1d6b06b790cb4d5] <==
	I1018 18:25:45.692090       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 18:25:45.692298       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1018 18:25:45.692395       1 main.go:148] setting mtu 1500 for CNI 
	I1018 18:25:45.692407       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 18:25:45.692418       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T18:25:45Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 18:25:45.908970       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 18:25:45.908988       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 18:25:45.909005       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 18:25:45.909949       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [06cd45c57fb1f9231e7e055195a58f25283c5dfad82d2b594c49d2f914affbb1] <==
	I1018 18:25:43.826792       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1018 18:25:43.836044       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1018 18:25:43.836096       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1018 18:25:43.836318       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1018 18:25:43.836395       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1018 18:25:43.836440       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 18:25:43.863061       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1018 18:25:43.869948       1 aggregator.go:171] initial CRD sync complete...
	I1018 18:25:43.869983       1 autoregister_controller.go:144] Starting autoregister controller
	I1018 18:25:43.869991       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 18:25:43.869999       1 cache.go:39] Caches are synced for autoregister controller
	I1018 18:25:43.920909       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 18:25:43.940552       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1018 18:25:44.002863       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1018 18:25:44.314600       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 18:25:44.881608       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 18:25:45.794481       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 18:25:46.085815       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 18:25:46.183617       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 18:25:46.204761       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 18:25:46.308600       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.61.220"}
	I1018 18:25:46.325147       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.10.118"}
	I1018 18:25:47.983187       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 18:25:48.210602       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 18:25:48.331721       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [e3bfcde1f4a1727da6087f51f60474aa425e63692284f02e03415c5e14f663ce] <==
	I1018 18:25:47.808003       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1018 18:25:47.813199       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 18:25:47.821465       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1018 18:25:47.825212       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1018 18:25:47.825221       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 18:25:47.825239       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1018 18:25:47.825706       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 18:25:47.827464       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 18:25:47.830731       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1018 18:25:47.830840       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 18:25:47.834060       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 18:25:47.834087       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 18:25:47.834094       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 18:25:47.837108       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1018 18:25:47.837194       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1018 18:25:47.837222       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1018 18:25:47.837233       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1018 18:25:47.837239       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1018 18:25:47.840057       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1018 18:25:47.842204       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 18:25:47.845535       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1018 18:25:47.849839       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-530891"
	I1018 18:25:47.850747       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1018 18:25:47.862213       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 18:25:47.881643       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [7745e67dd4bf1d9271d40d2bd20bea1d3022540aef104f3104f75cb3302bdabe] <==
	I1018 18:25:46.205369       1 server_linux.go:53] "Using iptables proxy"
	I1018 18:25:46.398235       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 18:25:46.498996       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 18:25:46.499082       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1018 18:25:46.499209       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 18:25:46.524419       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 18:25:46.524529       1 server_linux.go:132] "Using iptables Proxier"
	I1018 18:25:46.595115       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 18:25:46.595474       1 server.go:527] "Version info" version="v1.34.1"
	I1018 18:25:46.595490       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 18:25:46.597327       1 config.go:200] "Starting service config controller"
	I1018 18:25:46.597409       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 18:25:46.597453       1 config.go:106] "Starting endpoint slice config controller"
	I1018 18:25:46.597497       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 18:25:46.597535       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 18:25:46.597568       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 18:25:46.598196       1 config.go:309] "Starting node config controller"
	I1018 18:25:46.600902       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 18:25:46.601108       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 18:25:46.698084       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 18:25:46.698125       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 18:25:46.698167       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [db7e052bfb458a74f9e888e8ffa588ed16db1f7552c47499d1b30765c74fcce9] <==
	I1018 18:25:41.180464       1 serving.go:386] Generated self-signed cert in-memory
	I1018 18:25:45.550044       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 18:25:45.550073       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 18:25:45.571755       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 18:25:45.571841       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1018 18:25:45.571861       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1018 18:25:45.571908       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 18:25:45.591004       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 18:25:45.591025       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 18:25:45.591045       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 18:25:45.591051       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 18:25:45.674105       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1018 18:25:45.692870       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 18:25:45.692956       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 18:25:43 newest-cni-530891 kubelet[729]: I1018 18:25:43.946648     729 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-530891"
	Oct 18 18:25:44 newest-cni-530891 kubelet[729]: I1018 18:25:44.003072     729 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-530891"
	Oct 18 18:25:44 newest-cni-530891 kubelet[729]: I1018 18:25:44.003235     729 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-530891"
	Oct 18 18:25:44 newest-cni-530891 kubelet[729]: I1018 18:25:44.003313     729 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 18 18:25:44 newest-cni-530891 kubelet[729]: I1018 18:25:44.007100     729 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 18 18:25:44 newest-cni-530891 kubelet[729]: E1018 18:25:44.020734     729 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-530891\" already exists" pod="kube-system/etcd-newest-cni-530891"
	Oct 18 18:25:44 newest-cni-530891 kubelet[729]: I1018 18:25:44.020784     729 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-530891"
	Oct 18 18:25:44 newest-cni-530891 kubelet[729]: E1018 18:25:44.101744     729 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-530891\" already exists" pod="kube-system/kube-apiserver-newest-cni-530891"
	Oct 18 18:25:44 newest-cni-530891 kubelet[729]: I1018 18:25:44.101776     729 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-530891"
	Oct 18 18:25:44 newest-cni-530891 kubelet[729]: E1018 18:25:44.123136     729 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-530891\" already exists" pod="kube-system/kube-controller-manager-newest-cni-530891"
	Oct 18 18:25:44 newest-cni-530891 kubelet[729]: I1018 18:25:44.123170     729 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-530891"
	Oct 18 18:25:44 newest-cni-530891 kubelet[729]: E1018 18:25:44.166038     729 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-530891\" already exists" pod="kube-system/kube-scheduler-newest-cni-530891"
	Oct 18 18:25:44 newest-cni-530891 kubelet[729]: I1018 18:25:44.821727     729 apiserver.go:52] "Watching apiserver"
	Oct 18 18:25:44 newest-cni-530891 kubelet[729]: I1018 18:25:44.849856     729 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 18 18:25:44 newest-cni-530891 kubelet[729]: I1018 18:25:44.862116     729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/99e6305c-fb9e-4f10-9746-3dfdd03c570a-cni-cfg\") pod \"kindnet-497z4\" (UID: \"99e6305c-fb9e-4f10-9746-3dfdd03c570a\") " pod="kube-system/kindnet-497z4"
	Oct 18 18:25:44 newest-cni-530891 kubelet[729]: I1018 18:25:44.862328     729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/99e6305c-fb9e-4f10-9746-3dfdd03c570a-xtables-lock\") pod \"kindnet-497z4\" (UID: \"99e6305c-fb9e-4f10-9746-3dfdd03c570a\") " pod="kube-system/kindnet-497z4"
	Oct 18 18:25:44 newest-cni-530891 kubelet[729]: I1018 18:25:44.862470     729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/99e6305c-fb9e-4f10-9746-3dfdd03c570a-lib-modules\") pod \"kindnet-497z4\" (UID: \"99e6305c-fb9e-4f10-9746-3dfdd03c570a\") " pod="kube-system/kindnet-497z4"
	Oct 18 18:25:44 newest-cni-530891 kubelet[729]: I1018 18:25:44.862551     729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2f4233c2-bc5d-452a-84e3-875564801a54-xtables-lock\") pod \"kube-proxy-k8ljb\" (UID: \"2f4233c2-bc5d-452a-84e3-875564801a54\") " pod="kube-system/kube-proxy-k8ljb"
	Oct 18 18:25:44 newest-cni-530891 kubelet[729]: I1018 18:25:44.862711     729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2f4233c2-bc5d-452a-84e3-875564801a54-lib-modules\") pod \"kube-proxy-k8ljb\" (UID: \"2f4233c2-bc5d-452a-84e3-875564801a54\") " pod="kube-system/kube-proxy-k8ljb"
	Oct 18 18:25:44 newest-cni-530891 kubelet[729]: I1018 18:25:44.925275     729 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 18 18:25:45 newest-cni-530891 kubelet[729]: W1018 18:25:45.270161     729 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/592c46465c1aa48efe97f2b3db6c46c918fe8e6fb44a63deec22e7bb1784c31e/crio-e650f9b6f354349ddf1c0de73f3dffa9634229520070d7ad90209cc8a1e4e121 WatchSource:0}: Error finding container e650f9b6f354349ddf1c0de73f3dffa9634229520070d7ad90209cc8a1e4e121: Status 404 returned error can't find the container with id e650f9b6f354349ddf1c0de73f3dffa9634229520070d7ad90209cc8a1e4e121
	Oct 18 18:25:45 newest-cni-530891 kubelet[729]: W1018 18:25:45.370894     729 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/592c46465c1aa48efe97f2b3db6c46c918fe8e6fb44a63deec22e7bb1784c31e/crio-7702d7866c44b245fb5125de0b275f00d916abb79da175e3c60b294e481e0588 WatchSource:0}: Error finding container 7702d7866c44b245fb5125de0b275f00d916abb79da175e3c60b294e481e0588: Status 404 returned error can't find the container with id 7702d7866c44b245fb5125de0b275f00d916abb79da175e3c60b294e481e0588
	Oct 18 18:25:47 newest-cni-530891 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 18:25:47 newest-cni-530891 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 18:25:47 newest-cni-530891 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-530891 -n newest-cni-530891
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-530891 -n newest-cni-530891: exit status 2 (433.647559ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-530891 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-brzb4 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-w9bmc kubernetes-dashboard-855c9754f9-w4sx6
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-530891 describe pod coredns-66bc5c9577-brzb4 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-w9bmc kubernetes-dashboard-855c9754f9-w4sx6
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-530891 describe pod coredns-66bc5c9577-brzb4 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-w9bmc kubernetes-dashboard-855c9754f9-w4sx6: exit status 1 (98.92092ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-brzb4" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-w9bmc" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-w4sx6" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-530891 describe pod coredns-66bc5c9577-brzb4 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-w9bmc kubernetes-dashboard-855c9754f9-w4sx6: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-530891
helpers_test.go:243: (dbg) docker inspect newest-cni-530891:

-- stdout --
	[
	    {
	        "Id": "592c46465c1aa48efe97f2b3db6c46c918fe8e6fb44a63deec22e7bb1784c31e",
	        "Created": "2025-10-18T18:24:51.961915069Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 219520,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T18:25:31.106798134Z",
	            "FinishedAt": "2025-10-18T18:25:29.990210321Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/592c46465c1aa48efe97f2b3db6c46c918fe8e6fb44a63deec22e7bb1784c31e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/592c46465c1aa48efe97f2b3db6c46c918fe8e6fb44a63deec22e7bb1784c31e/hostname",
	        "HostsPath": "/var/lib/docker/containers/592c46465c1aa48efe97f2b3db6c46c918fe8e6fb44a63deec22e7bb1784c31e/hosts",
	        "LogPath": "/var/lib/docker/containers/592c46465c1aa48efe97f2b3db6c46c918fe8e6fb44a63deec22e7bb1784c31e/592c46465c1aa48efe97f2b3db6c46c918fe8e6fb44a63deec22e7bb1784c31e-json.log",
	        "Name": "/newest-cni-530891",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-530891:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-530891",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "592c46465c1aa48efe97f2b3db6c46c918fe8e6fb44a63deec22e7bb1784c31e",
	                "LowerDir": "/var/lib/docker/overlay2/6123219178d0089d945290d8d54993696ba0db05146a36d826c912c6a71dea18-init/diff:/var/lib/docker/overlay2/584ab177b02ad2db5330471b7171ad39934c457d8615b9ee4939a04b59f78474/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6123219178d0089d945290d8d54993696ba0db05146a36d826c912c6a71dea18/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6123219178d0089d945290d8d54993696ba0db05146a36d826c912c6a71dea18/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6123219178d0089d945290d8d54993696ba0db05146a36d826c912c6a71dea18/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-530891",
	                "Source": "/var/lib/docker/volumes/newest-cni-530891/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-530891",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-530891",
	                "name.minikube.sigs.k8s.io": "newest-cni-530891",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "15db1831ec8a0c62ce080c0e2ce16a92530b87bb8835e505562d09c8ef6a6dae",
	            "SandboxKey": "/var/run/docker/netns/15db1831ec8a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-530891": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d6:25:b2:fe:a2:38",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "430d25fc02daf5d96f95a7e706911a3c6ed05a1ed551d0fc6d07a2b7559606cd",
	                    "EndpointID": "0d42b0b0458d252cbef047f36c49997e25d582b0093042a23b94eca0223fb645",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-530891",
	                        "592c46465c1a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-530891 -n newest-cni-530891
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-530891 -n newest-cni-530891: exit status 2 (437.097925ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-530891 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-530891 logs -n 25: (1.569773443s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ addons  │ enable metrics-server -p embed-certs-213943 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-213943           │ jenkins │ v1.37.0 │ 18 Oct 25 18:23 UTC │                     │
	│ stop    │ -p embed-certs-213943 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-213943           │ jenkins │ v1.37.0 │ 18 Oct 25 18:23 UTC │ 18 Oct 25 18:23 UTC │
	│ addons  │ enable dashboard -p embed-certs-213943 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-213943           │ jenkins │ v1.37.0 │ 18 Oct 25 18:23 UTC │ 18 Oct 25 18:23 UTC │
	│ start   │ -p embed-certs-213943 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-213943           │ jenkins │ v1.37.0 │ 18 Oct 25 18:23 UTC │ 18 Oct 25 18:24 UTC │
	│ image   │ default-k8s-diff-port-192562 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-192562 │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │ 18 Oct 25 18:24 UTC │
	│ pause   │ -p default-k8s-diff-port-192562 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-192562 │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-192562                                                                                                                                                                                                               │ default-k8s-diff-port-192562 │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │ 18 Oct 25 18:24 UTC │
	│ delete  │ -p default-k8s-diff-port-192562                                                                                                                                                                                                               │ default-k8s-diff-port-192562 │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │ 18 Oct 25 18:24 UTC │
	│ delete  │ -p disable-driver-mounts-747178                                                                                                                                                                                                               │ disable-driver-mounts-747178 │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │ 18 Oct 25 18:24 UTC │
	│ start   │ -p no-preload-729957 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-729957            │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │ 18 Oct 25 18:25 UTC │
	│ image   │ embed-certs-213943 image list --format=json                                                                                                                                                                                                   │ embed-certs-213943           │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │ 18 Oct 25 18:24 UTC │
	│ pause   │ -p embed-certs-213943 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-213943           │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │                     │
	│ delete  │ -p embed-certs-213943                                                                                                                                                                                                                         │ embed-certs-213943           │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │ 18 Oct 25 18:24 UTC │
	│ delete  │ -p embed-certs-213943                                                                                                                                                                                                                         │ embed-certs-213943           │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │ 18 Oct 25 18:24 UTC │
	│ start   │ -p newest-cni-530891 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-530891            │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │ 18 Oct 25 18:25 UTC │
	│ addons  │ enable metrics-server -p newest-cni-530891 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-530891            │ jenkins │ v1.37.0 │ 18 Oct 25 18:25 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-729957 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-729957            │ jenkins │ v1.37.0 │ 18 Oct 25 18:25 UTC │                     │
	│ stop    │ -p newest-cni-530891 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-530891            │ jenkins │ v1.37.0 │ 18 Oct 25 18:25 UTC │ 18 Oct 25 18:25 UTC │
	│ addons  │ enable dashboard -p newest-cni-530891 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-530891            │ jenkins │ v1.37.0 │ 18 Oct 25 18:25 UTC │ 18 Oct 25 18:25 UTC │
	│ start   │ -p newest-cni-530891 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-530891            │ jenkins │ v1.37.0 │ 18 Oct 25 18:25 UTC │ 18 Oct 25 18:25 UTC │
	│ stop    │ -p no-preload-729957 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-729957            │ jenkins │ v1.37.0 │ 18 Oct 25 18:25 UTC │ 18 Oct 25 18:25 UTC │
	│ addons  │ enable dashboard -p no-preload-729957 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-729957            │ jenkins │ v1.37.0 │ 18 Oct 25 18:25 UTC │ 18 Oct 25 18:25 UTC │
	│ start   │ -p no-preload-729957 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-729957            │ jenkins │ v1.37.0 │ 18 Oct 25 18:25 UTC │                     │
	│ image   │ newest-cni-530891 image list --format=json                                                                                                                                                                                                    │ newest-cni-530891            │ jenkins │ v1.37.0 │ 18 Oct 25 18:25 UTC │ 18 Oct 25 18:25 UTC │
	│ pause   │ -p newest-cni-530891 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-530891            │ jenkins │ v1.37.0 │ 18 Oct 25 18:25 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 18:25:44
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 18:25:44.243603  221240 out.go:360] Setting OutFile to fd 1 ...
	I1018 18:25:44.244215  221240 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 18:25:44.244248  221240 out.go:374] Setting ErrFile to fd 2...
	I1018 18:25:44.244268  221240 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 18:25:44.244564  221240 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-2509/.minikube/bin
	I1018 18:25:44.245023  221240 out.go:368] Setting JSON to false
	I1018 18:25:44.245958  221240 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7694,"bootTime":1760804251,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1018 18:25:44.246049  221240 start.go:141] virtualization:  
	I1018 18:25:44.249674  221240 out.go:179] * [no-preload-729957] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 18:25:44.252771  221240 out.go:179]   - MINIKUBE_LOCATION=21409
	I1018 18:25:44.252842  221240 notify.go:220] Checking for updates...
	I1018 18:25:44.258668  221240 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 18:25:44.261705  221240 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-2509/kubeconfig
	I1018 18:25:44.264694  221240 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-2509/.minikube
	I1018 18:25:44.267873  221240 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 18:25:44.270758  221240 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 18:25:44.274150  221240 config.go:182] Loaded profile config "no-preload-729957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 18:25:44.274690  221240 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 18:25:44.328645  221240 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 18:25:44.328761  221240 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 18:25:44.441712  221240 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-18 18:25:44.428626954 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 18:25:44.441812  221240 docker.go:318] overlay module found
	I1018 18:25:44.444903  221240 out.go:179] * Using the docker driver based on existing profile
	I1018 18:25:44.447726  221240 start.go:305] selected driver: docker
	I1018 18:25:44.447752  221240 start.go:925] validating driver "docker" against &{Name:no-preload-729957 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-729957 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 18:25:44.447862  221240 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 18:25:44.448563  221240 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 18:25:44.564739  221240 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-18 18:25:44.554399962 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 18:25:44.565158  221240 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 18:25:44.565190  221240 cni.go:84] Creating CNI manager for ""
	I1018 18:25:44.565243  221240 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 18:25:44.565276  221240 start.go:349] cluster config:
	{Name:no-preload-729957 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-729957 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 18:25:44.568532  221240 out.go:179] * Starting "no-preload-729957" primary control-plane node in "no-preload-729957" cluster
	I1018 18:25:44.571441  221240 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 18:25:44.574404  221240 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 18:25:44.577298  221240 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 18:25:44.577439  221240 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/no-preload-729957/config.json ...
	I1018 18:25:44.577784  221240 cache.go:107] acquiring lock: {Name:mkfe0c95c3696c6ee6d6bee7d1ad713b9bd021b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 18:25:44.577860  221240 cache.go:115] /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1018 18:25:44.577867  221240 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 97.683µs
	I1018 18:25:44.577876  221240 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1018 18:25:44.577887  221240 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 18:25:44.578124  221240 cache.go:107] acquiring lock: {Name:mk2fda38822643b1c863eb02b4b58b1c8beea2d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 18:25:44.578187  221240 cache.go:115] /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1018 18:25:44.578194  221240 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 76.604µs
	I1018 18:25:44.578201  221240 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1018 18:25:44.578212  221240 cache.go:107] acquiring lock: {Name:mkd26b3798aaf66fcad945e0c1a60f0824366e40 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 18:25:44.578255  221240 cache.go:115] /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1018 18:25:44.578261  221240 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 51.324µs
	I1018 18:25:44.578268  221240 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1018 18:25:44.578277  221240 cache.go:107] acquiring lock: {Name:mkd3282648be7d83ac0e67296042440acb53052b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 18:25:44.578304  221240 cache.go:115] /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1018 18:25:44.578309  221240 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 33.231µs
	I1018 18:25:44.578315  221240 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1018 18:25:44.578323  221240 cache.go:107] acquiring lock: {Name:mk6a37c53550d30a6c5a6027e63e35937896f954 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 18:25:44.578349  221240 cache.go:115] /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1018 18:25:44.578354  221240 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 31.5µs
	I1018 18:25:44.578360  221240 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1018 18:25:44.578370  221240 cache.go:107] acquiring lock: {Name:mka02bf3e7fa031efb5dd0162aedd881c5c29af2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 18:25:44.578394  221240 cache.go:115] /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1018 18:25:44.578399  221240 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 30.359µs
	I1018 18:25:44.578405  221240 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1018 18:25:44.578413  221240 cache.go:107] acquiring lock: {Name:mk3a776414901f1896d41bf7105926b8db2f104a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 18:25:44.578437  221240 cache.go:115] /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1018 18:25:44.578443  221240 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 30.417µs
	I1018 18:25:44.578448  221240 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1018 18:25:44.578458  221240 cache.go:107] acquiring lock: {Name:mke59697c6719748ff18c4e99b2595c9da08adaf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 18:25:44.578485  221240 cache.go:115] /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1018 18:25:44.578489  221240 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 32.534µs
	I1018 18:25:44.578495  221240 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21409-2509/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1018 18:25:44.578501  221240 cache.go:87] Successfully saved all images to host disk.
	I1018 18:25:44.606298  221240 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 18:25:44.606317  221240 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 18:25:44.606330  221240 cache.go:232] Successfully downloaded all kic artifacts
	I1018 18:25:44.606353  221240 start.go:360] acquireMachinesLock for no-preload-729957: {Name:mke750361707948cde27a747cd8852fabeab5692 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 18:25:44.606402  221240 start.go:364] duration metric: took 35.357µs to acquireMachinesLock for "no-preload-729957"
	I1018 18:25:44.606420  221240 start.go:96] Skipping create...Using existing machine configuration
	I1018 18:25:44.606425  221240 fix.go:54] fixHost starting: 
	I1018 18:25:44.606684  221240 cli_runner.go:164] Run: docker container inspect no-preload-729957 --format={{.State.Status}}
	I1018 18:25:44.633508  221240 fix.go:112] recreateIfNeeded on no-preload-729957: state=Stopped err=<nil>
	W1018 18:25:44.633537  221240 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 18:25:46.366633  219338 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.430814406s)
	I1018 18:25:46.366718  219338 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.406395859s)
	I1018 18:25:46.367034  219338 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (7.389734566s)
	I1018 18:25:46.367059  219338 api_server.go:72] duration metric: took 7.788351789s to wait for apiserver process to appear ...
	I1018 18:25:46.367069  219338 api_server.go:88] waiting for apiserver healthz status ...
	I1018 18:25:46.367082  219338 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 18:25:46.367361  219338 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.958665898s)
	I1018 18:25:46.370294  219338 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-530891 addons enable metrics-server
	
	I1018 18:25:46.386317  219338 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1018 18:25:46.387364  219338 api_server.go:141] control plane version: v1.34.1
	I1018 18:25:46.387390  219338 api_server.go:131] duration metric: took 20.3146ms to wait for apiserver health ...
	I1018 18:25:46.387400  219338 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 18:25:46.394955  219338 system_pods.go:59] 8 kube-system pods found
	I1018 18:25:46.394993  219338 system_pods.go:61] "coredns-66bc5c9577-brzb4" [762df58f-b70f-479e-b130-07c24a8f3f51] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1018 18:25:46.395003  219338 system_pods.go:61] "etcd-newest-cni-530891" [1cf783e9-928f-47f5-be9d-4df2479e9b31] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 18:25:46.395009  219338 system_pods.go:61] "kindnet-497z4" [99e6305c-fb9e-4f10-9746-3dfdd03c570a] Running
	I1018 18:25:46.395026  219338 system_pods.go:61] "kube-apiserver-newest-cni-530891" [b43d0e4b-98c3-4c5e-96dc-4ab8c7913e63] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 18:25:46.395033  219338 system_pods.go:61] "kube-controller-manager-newest-cni-530891" [0d687e21-ef2f-4a67-94ea-d40750239b57] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 18:25:46.395038  219338 system_pods.go:61] "kube-proxy-k8ljb" [2f4233c2-bc5d-452a-84e3-875564801a54] Running
	I1018 18:25:46.395045  219338 system_pods.go:61] "kube-scheduler-newest-cni-530891" [a81c1ce2-edc2-4f88-aebd-d06916133c38] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 18:25:46.395052  219338 system_pods.go:61] "storage-provisioner" [b2348e9f-6e43-4f09-a0c0-01ab697d968a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1018 18:25:46.395058  219338 system_pods.go:74] duration metric: took 7.644464ms to wait for pod list to return data ...
	I1018 18:25:46.395072  219338 default_sa.go:34] waiting for default service account to be created ...
	I1018 18:25:46.398654  219338 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1018 18:25:46.400005  219338 default_sa.go:45] found service account: "default"
	I1018 18:25:46.400025  219338 default_sa.go:55] duration metric: took 4.947313ms for default service account to be created ...
	I1018 18:25:46.400040  219338 kubeadm.go:586] duration metric: took 7.821329691s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1018 18:25:46.400057  219338 node_conditions.go:102] verifying NodePressure condition ...
	I1018 18:25:46.401475  219338 addons.go:514] duration metric: took 7.822395461s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1018 18:25:46.414466  219338 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 18:25:46.414508  219338 node_conditions.go:123] node cpu capacity is 2
	I1018 18:25:46.414521  219338 node_conditions.go:105] duration metric: took 14.458861ms to run NodePressure ...
	I1018 18:25:46.414533  219338 start.go:241] waiting for startup goroutines ...
	I1018 18:25:46.414540  219338 start.go:246] waiting for cluster config update ...
	I1018 18:25:46.414551  219338 start.go:255] writing updated cluster config ...
	I1018 18:25:46.414882  219338 ssh_runner.go:195] Run: rm -f paused
	I1018 18:25:46.501941  219338 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 18:25:46.505066  219338 out.go:179] * Done! kubectl is now configured to use "newest-cni-530891" cluster and "default" namespace by default
	I1018 18:25:44.636630  221240 out.go:252] * Restarting existing docker container for "no-preload-729957" ...
	I1018 18:25:44.636735  221240 cli_runner.go:164] Run: docker start no-preload-729957
	I1018 18:25:45.037168  221240 cli_runner.go:164] Run: docker container inspect no-preload-729957 --format={{.State.Status}}
	I1018 18:25:45.068006  221240 kic.go:430] container "no-preload-729957" state is running.
	I1018 18:25:45.068429  221240 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-729957
	I1018 18:25:45.101308  221240 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/no-preload-729957/config.json ...
	I1018 18:25:45.101578  221240 machine.go:93] provisionDockerMachine start ...
	I1018 18:25:45.101645  221240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-729957
	I1018 18:25:45.150841  221240 main.go:141] libmachine: Using SSH client type: native
	I1018 18:25:45.151175  221240 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1018 18:25:45.151186  221240 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 18:25:45.155174  221240 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51454->127.0.0.1:33088: read: connection reset by peer
	I1018 18:25:48.349227  221240 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-729957
	
	I1018 18:25:48.349295  221240 ubuntu.go:182] provisioning hostname "no-preload-729957"
	I1018 18:25:48.349374  221240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-729957
	I1018 18:25:48.371544  221240 main.go:141] libmachine: Using SSH client type: native
	I1018 18:25:48.371850  221240 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1018 18:25:48.371866  221240 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-729957 && echo "no-preload-729957" | sudo tee /etc/hostname
	I1018 18:25:48.538716  221240 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-729957
	
	I1018 18:25:48.538816  221240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-729957
	I1018 18:25:48.558087  221240 main.go:141] libmachine: Using SSH client type: native
	I1018 18:25:48.558459  221240 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1018 18:25:48.558484  221240 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-729957' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-729957/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-729957' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 18:25:48.705455  221240 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 18:25:48.705529  221240 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-2509/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-2509/.minikube}
	I1018 18:25:48.705563  221240 ubuntu.go:190] setting up certificates
	I1018 18:25:48.705598  221240 provision.go:84] configureAuth start
	I1018 18:25:48.705694  221240 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-729957
	I1018 18:25:48.723622  221240 provision.go:143] copyHostCerts
	I1018 18:25:48.723750  221240 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem, removing ...
	I1018 18:25:48.723810  221240 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem
	I1018 18:25:48.723931  221240 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem (1078 bytes)
	I1018 18:25:48.724045  221240 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem, removing ...
	I1018 18:25:48.724052  221240 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem
	I1018 18:25:48.724079  221240 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem (1123 bytes)
	I1018 18:25:48.724138  221240 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem, removing ...
	I1018 18:25:48.724143  221240 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem
	I1018 18:25:48.724165  221240 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem (1675 bytes)
	I1018 18:25:48.724225  221240 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem org=jenkins.no-preload-729957 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-729957]
	I1018 18:25:49.135641  221240 provision.go:177] copyRemoteCerts
	I1018 18:25:49.135759  221240 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 18:25:49.135823  221240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-729957
	I1018 18:25:49.162993  221240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/no-preload-729957/id_rsa Username:docker}
	I1018 18:25:49.266263  221240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 18:25:49.286198  221240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1018 18:25:49.306449  221240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 18:25:49.326573  221240 provision.go:87] duration metric: took 620.935598ms to configureAuth
	I1018 18:25:49.326639  221240 ubuntu.go:206] setting minikube options for container-runtime
	I1018 18:25:49.326859  221240 config.go:182] Loaded profile config "no-preload-729957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 18:25:49.326999  221240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-729957
	I1018 18:25:49.351931  221240 main.go:141] libmachine: Using SSH client type: native
	I1018 18:25:49.352239  221240 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1018 18:25:49.352253  221240 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 18:25:49.742883  221240 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 18:25:49.742916  221240 machine.go:96] duration metric: took 4.641327488s to provisionDockerMachine
	I1018 18:25:49.742928  221240 start.go:293] postStartSetup for "no-preload-729957" (driver="docker")
	I1018 18:25:49.742939  221240 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 18:25:49.743009  221240 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 18:25:49.743061  221240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-729957
	I1018 18:25:49.781962  221240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/no-preload-729957/id_rsa Username:docker}
	I1018 18:25:49.898488  221240 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 18:25:49.902884  221240 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 18:25:49.902920  221240 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 18:25:49.902932  221240 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/addons for local assets ...
	I1018 18:25:49.902995  221240 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/files for local assets ...
	I1018 18:25:49.903082  221240 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> 43202.pem in /etc/ssl/certs
	I1018 18:25:49.903198  221240 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 18:25:49.914876  221240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /etc/ssl/certs/43202.pem (1708 bytes)
	I1018 18:25:49.939057  221240 start.go:296] duration metric: took 196.113712ms for postStartSetup
	I1018 18:25:49.939193  221240 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 18:25:49.939243  221240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-729957
	I1018 18:25:49.961636  221240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/no-preload-729957/id_rsa Username:docker}
	I1018 18:25:50.083046  221240 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 18:25:50.090789  221240 fix.go:56] duration metric: took 5.48435602s for fixHost
	I1018 18:25:50.090812  221240 start.go:83] releasing machines lock for "no-preload-729957", held for 5.48440201s
	I1018 18:25:50.090881  221240 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-729957
	I1018 18:25:50.110120  221240 ssh_runner.go:195] Run: cat /version.json
	I1018 18:25:50.110180  221240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-729957
	I1018 18:25:50.110459  221240 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 18:25:50.110525  221240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-729957
	I1018 18:25:50.135472  221240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/no-preload-729957/id_rsa Username:docker}
	I1018 18:25:50.143065  221240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/no-preload-729957/id_rsa Username:docker}
	I1018 18:25:50.261138  221240 ssh_runner.go:195] Run: systemctl --version
	I1018 18:25:50.362319  221240 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 18:25:50.411821  221240 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 18:25:50.416794  221240 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 18:25:50.416876  221240 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 18:25:50.427473  221240 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 18:25:50.427496  221240 start.go:495] detecting cgroup driver to use...
	I1018 18:25:50.427527  221240 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 18:25:50.427578  221240 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 18:25:50.446907  221240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 18:25:50.466220  221240 docker.go:218] disabling cri-docker service (if available) ...
	I1018 18:25:50.466287  221240 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 18:25:50.487084  221240 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 18:25:50.503231  221240 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 18:25:50.656115  221240 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 18:25:50.805117  221240 docker.go:234] disabling docker service ...
	I1018 18:25:50.805202  221240 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 18:25:50.820680  221240 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 18:25:50.837116  221240 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 18:25:50.990804  221240 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 18:25:51.143393  221240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 18:25:51.162279  221240 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 18:25:51.195433  221240 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 18:25:51.195509  221240 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:25:51.205034  221240 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 18:25:51.205096  221240 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:25:51.215663  221240 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:25:51.228062  221240 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:25:51.243515  221240 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 18:25:51.252211  221240 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:25:51.267931  221240 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:25:51.278520  221240 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:25:51.295787  221240 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 18:25:51.304862  221240 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 18:25:51.316730  221240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 18:25:51.458716  221240 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 18:25:51.640567  221240 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 18:25:51.640654  221240 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 18:25:51.644920  221240 start.go:563] Will wait 60s for crictl version
	I1018 18:25:51.645026  221240 ssh_runner.go:195] Run: which crictl
	I1018 18:25:51.649154  221240 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 18:25:51.688708  221240 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 18:25:51.688802  221240 ssh_runner.go:195] Run: crio --version
	I1018 18:25:51.735836  221240 ssh_runner.go:195] Run: crio --version
	I1018 18:25:51.779157  221240 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
	
	==> CRI-O <==
	Oct 18 18:25:45 newest-cni-530891 crio[612]: time="2025-10-18T18:25:45.15265575Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 18:25:45 newest-cni-530891 crio[612]: time="2025-10-18T18:25:45.188687911Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=a0d3fe6d-bb31-4f7f-b47f-3543b15bafd3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 18:25:45 newest-cni-530891 crio[612]: time="2025-10-18T18:25:45.189532557Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-k8ljb/POD" id=6ec66d9e-4cef-4a32-b97f-c8ca8b4a00ac name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 18:25:45 newest-cni-530891 crio[612]: time="2025-10-18T18:25:45.189839441Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 18:25:45 newest-cni-530891 crio[612]: time="2025-10-18T18:25:45.256337758Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=6ec66d9e-4cef-4a32-b97f-c8ca8b4a00ac name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 18:25:45 newest-cni-530891 crio[612]: time="2025-10-18T18:25:45.273060705Z" level=info msg="Ran pod sandbox e650f9b6f354349ddf1c0de73f3dffa9634229520070d7ad90209cc8a1e4e121 with infra container: kube-system/kindnet-497z4/POD" id=a0d3fe6d-bb31-4f7f-b47f-3543b15bafd3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 18:25:45 newest-cni-530891 crio[612]: time="2025-10-18T18:25:45.289398606Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=300bbb88-6bc8-4cea-8e36-6c22afee0d7e name=/runtime.v1.ImageService/ImageStatus
	Oct 18 18:25:45 newest-cni-530891 crio[612]: time="2025-10-18T18:25:45.294716877Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=d09cf8ed-f957-44c5-a937-aae03e354f7c name=/runtime.v1.ImageService/ImageStatus
	Oct 18 18:25:45 newest-cni-530891 crio[612]: time="2025-10-18T18:25:45.29661163Z" level=info msg="Creating container: kube-system/kindnet-497z4/kindnet-cni" id=9b529e55-a47a-44df-ac48-b3d9471a41d4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 18:25:45 newest-cni-530891 crio[612]: time="2025-10-18T18:25:45.297315672Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 18:25:45 newest-cni-530891 crio[612]: time="2025-10-18T18:25:45.373166773Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 18:25:45 newest-cni-530891 crio[612]: time="2025-10-18T18:25:45.38408368Z" level=info msg="Ran pod sandbox 7702d7866c44b245fb5125de0b275f00d916abb79da175e3c60b294e481e0588 with infra container: kube-system/kube-proxy-k8ljb/POD" id=6ec66d9e-4cef-4a32-b97f-c8ca8b4a00ac name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 18:25:45 newest-cni-530891 crio[612]: time="2025-10-18T18:25:45.384480149Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 18:25:45 newest-cni-530891 crio[612]: time="2025-10-18T18:25:45.399464358Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=4e488749-7065-44cf-92fa-45cd42d533f7 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 18:25:45 newest-cni-530891 crio[612]: time="2025-10-18T18:25:45.403736545Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=21fd1537-4b00-4523-9802-ec50676b2fa3 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 18:25:45 newest-cni-530891 crio[612]: time="2025-10-18T18:25:45.409983411Z" level=info msg="Creating container: kube-system/kube-proxy-k8ljb/kube-proxy" id=cf5ea5f6-0891-46d9-9f90-222b04462ac6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 18:25:45 newest-cni-530891 crio[612]: time="2025-10-18T18:25:45.411780324Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 18:25:45 newest-cni-530891 crio[612]: time="2025-10-18T18:25:45.451351787Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 18:25:45 newest-cni-530891 crio[612]: time="2025-10-18T18:25:45.473321046Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 18:25:45 newest-cni-530891 crio[612]: time="2025-10-18T18:25:45.478475074Z" level=info msg="Created container f997a48b38311515b3f32540bec213d00420efe5b85e8623f1d6b06b790cb4d5: kube-system/kindnet-497z4/kindnet-cni" id=9b529e55-a47a-44df-ac48-b3d9471a41d4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 18:25:45 newest-cni-530891 crio[612]: time="2025-10-18T18:25:45.487793832Z" level=info msg="Starting container: f997a48b38311515b3f32540bec213d00420efe5b85e8623f1d6b06b790cb4d5" id=ec147c13-d68e-47cc-89e5-90bc1a26c99c name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 18:25:45 newest-cni-530891 crio[612]: time="2025-10-18T18:25:45.501293829Z" level=info msg="Started container" PID=1059 containerID=f997a48b38311515b3f32540bec213d00420efe5b85e8623f1d6b06b790cb4d5 description=kube-system/kindnet-497z4/kindnet-cni id=ec147c13-d68e-47cc-89e5-90bc1a26c99c name=/runtime.v1.RuntimeService/StartContainer sandboxID=e650f9b6f354349ddf1c0de73f3dffa9634229520070d7ad90209cc8a1e4e121
	Oct 18 18:25:45 newest-cni-530891 crio[612]: time="2025-10-18T18:25:45.597321173Z" level=info msg="Created container 7745e67dd4bf1d9271d40d2bd20bea1d3022540aef104f3104f75cb3302bdabe: kube-system/kube-proxy-k8ljb/kube-proxy" id=cf5ea5f6-0891-46d9-9f90-222b04462ac6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 18:25:45 newest-cni-530891 crio[612]: time="2025-10-18T18:25:45.616634817Z" level=info msg="Starting container: 7745e67dd4bf1d9271d40d2bd20bea1d3022540aef104f3104f75cb3302bdabe" id=882ee054-6fb1-4b32-8cd9-734f965eb101 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 18:25:45 newest-cni-530891 crio[612]: time="2025-10-18T18:25:45.631763028Z" level=info msg="Started container" PID=1068 containerID=7745e67dd4bf1d9271d40d2bd20bea1d3022540aef104f3104f75cb3302bdabe description=kube-system/kube-proxy-k8ljb/kube-proxy id=882ee054-6fb1-4b32-8cd9-734f965eb101 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7702d7866c44b245fb5125de0b275f00d916abb79da175e3c60b294e481e0588
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	7745e67dd4bf1       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   7 seconds ago       Running             kube-proxy                1                   7702d7866c44b       kube-proxy-k8ljb                            kube-system
	f997a48b38311       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   7 seconds ago       Running             kindnet-cni               1                   e650f9b6f3543       kindnet-497z4                               kube-system
	06cd45c57fb1f       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   14 seconds ago      Running             kube-apiserver            1                   1ace07e34915a       kube-apiserver-newest-cni-530891            kube-system
	db7e052bfb458       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   14 seconds ago      Running             kube-scheduler            1                   6722caf17c9df       kube-scheduler-newest-cni-530891            kube-system
	b8704a5aac61c       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   14 seconds ago      Running             etcd                      1                   f33a61f5fd445       etcd-newest-cni-530891                      kube-system
	e3bfcde1f4a17       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   14 seconds ago      Running             kube-controller-manager   1                   3c3452a0378a2       kube-controller-manager-newest-cni-530891   kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-530891
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-530891
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=newest-cni-530891
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T18_25_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 18:25:15 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-530891
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 18:25:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 18:25:44 +0000   Sat, 18 Oct 2025 18:25:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 18:25:44 +0000   Sat, 18 Oct 2025 18:25:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 18:25:44 +0000   Sat, 18 Oct 2025 18:25:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 18 Oct 2025 18:25:44 +0000   Sat, 18 Oct 2025 18:25:11 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-530891
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                4a5cb23d-033c-4f7d-ae76-6a54d50540e5
	  Boot ID:                    8a1d6305-2994-4aa4-a4cc-6b62966b9918
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-530891                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         35s
	  kube-system                 kindnet-497z4                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-newest-cni-530891             250m (12%)    0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-controller-manager-newest-cni-530891    200m (10%)    0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-proxy-k8ljb                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-newest-cni-530891             100m (5%)     0 (0%)      0 (0%)           0 (0%)         35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 26s                kube-proxy       
	  Normal   Starting                 6s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  42s (x8 over 42s)  kubelet          Node newest-cni-530891 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    42s (x8 over 42s)  kubelet          Node newest-cni-530891 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     42s (x8 over 42s)  kubelet          Node newest-cni-530891 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    35s                kubelet          Node newest-cni-530891 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 35s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  35s                kubelet          Node newest-cni-530891 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     35s                kubelet          Node newest-cni-530891 status is now: NodeHasSufficientPID
	  Normal   Starting                 35s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           30s                node-controller  Node newest-cni-530891 event: Registered Node newest-cni-530891 in Controller
	  Normal   Starting                 16s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 16s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  15s (x8 over 16s)  kubelet          Node newest-cni-530891 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    15s (x8 over 16s)  kubelet          Node newest-cni-530891 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     15s (x8 over 16s)  kubelet          Node newest-cni-530891 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6s                 node-controller  Node newest-cni-530891 event: Registered Node newest-cni-530891 in Controller
	
	
	==> dmesg <==
	[Oct18 18:05] overlayfs: idmapped layers are currently not supported
	[ +25.128760] overlayfs: idmapped layers are currently not supported
	[Oct18 18:06] overlayfs: idmapped layers are currently not supported
	[Oct18 18:07] overlayfs: idmapped layers are currently not supported
	[Oct18 18:08] overlayfs: idmapped layers are currently not supported
	[Oct18 18:09] overlayfs: idmapped layers are currently not supported
	[Oct18 18:11] overlayfs: idmapped layers are currently not supported
	[Oct18 18:13] overlayfs: idmapped layers are currently not supported
	[ +30.969240] overlayfs: idmapped layers are currently not supported
	[Oct18 18:15] overlayfs: idmapped layers are currently not supported
	[Oct18 18:16] overlayfs: idmapped layers are currently not supported
	[Oct18 18:17] overlayfs: idmapped layers are currently not supported
	[ +23.167826] overlayfs: idmapped layers are currently not supported
	[Oct18 18:18] overlayfs: idmapped layers are currently not supported
	[ +38.509809] overlayfs: idmapped layers are currently not supported
	[Oct18 18:19] overlayfs: idmapped layers are currently not supported
	[Oct18 18:21] overlayfs: idmapped layers are currently not supported
	[Oct18 18:22] overlayfs: idmapped layers are currently not supported
	[Oct18 18:23] overlayfs: idmapped layers are currently not supported
	[ +30.822562] overlayfs: idmapped layers are currently not supported
	[Oct18 18:24] bpfilter: read fail -512
	[ +10.607871] overlayfs: idmapped layers are currently not supported
	[Oct18 18:25] overlayfs: idmapped layers are currently not supported
	[ +26.762544] overlayfs: idmapped layers are currently not supported
	[ +14.684259] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [b8704a5aac61ca586b4c30874de5bf81d5cdaae0f243e3ef02e446567cf0f0de] <==
	{"level":"warn","ts":"2025-10-18T18:25:41.978593Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:41.989996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:42.007526Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:42.035776Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:42.054849Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:42.075784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:42.114576Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:42.143377Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:42.155357Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:42.181006Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:42.201858Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:42.214346Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:42.235304Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:42.252622Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:42.276204Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:42.287903Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:42.306320Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:42.330632Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:42.348901Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:42.364352Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:42.382880Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:42.416499Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:42.474540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:42.491950Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:42.548203Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39150","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 18:25:53 up  2:08,  0 user,  load average: 3.86, 3.35, 2.91
	Linux newest-cni-530891 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f997a48b38311515b3f32540bec213d00420efe5b85e8623f1d6b06b790cb4d5] <==
	I1018 18:25:45.692090       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 18:25:45.692298       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1018 18:25:45.692395       1 main.go:148] setting mtu 1500 for CNI 
	I1018 18:25:45.692407       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 18:25:45.692418       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T18:25:45Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 18:25:45.908970       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 18:25:45.908988       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 18:25:45.909005       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 18:25:45.909949       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [06cd45c57fb1f9231e7e055195a58f25283c5dfad82d2b594c49d2f914affbb1] <==
	I1018 18:25:43.826792       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1018 18:25:43.836044       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1018 18:25:43.836096       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1018 18:25:43.836318       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1018 18:25:43.836395       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1018 18:25:43.836440       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 18:25:43.863061       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1018 18:25:43.869948       1 aggregator.go:171] initial CRD sync complete...
	I1018 18:25:43.869983       1 autoregister_controller.go:144] Starting autoregister controller
	I1018 18:25:43.869991       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 18:25:43.869999       1 cache.go:39] Caches are synced for autoregister controller
	I1018 18:25:43.920909       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 18:25:43.940552       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1018 18:25:44.002863       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1018 18:25:44.314600       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 18:25:44.881608       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 18:25:45.794481       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 18:25:46.085815       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 18:25:46.183617       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 18:25:46.204761       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 18:25:46.308600       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.61.220"}
	I1018 18:25:46.325147       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.10.118"}
	I1018 18:25:47.983187       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 18:25:48.210602       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 18:25:48.331721       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [e3bfcde1f4a1727da6087f51f60474aa425e63692284f02e03415c5e14f663ce] <==
	I1018 18:25:47.808003       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1018 18:25:47.813199       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 18:25:47.821465       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1018 18:25:47.825212       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1018 18:25:47.825221       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 18:25:47.825239       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1018 18:25:47.825706       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 18:25:47.827464       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 18:25:47.830731       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1018 18:25:47.830840       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 18:25:47.834060       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 18:25:47.834087       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 18:25:47.834094       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 18:25:47.837108       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1018 18:25:47.837194       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1018 18:25:47.837222       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1018 18:25:47.837233       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1018 18:25:47.837239       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1018 18:25:47.840057       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1018 18:25:47.842204       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 18:25:47.845535       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1018 18:25:47.849839       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-530891"
	I1018 18:25:47.850747       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1018 18:25:47.862213       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 18:25:47.881643       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [7745e67dd4bf1d9271d40d2bd20bea1d3022540aef104f3104f75cb3302bdabe] <==
	I1018 18:25:46.205369       1 server_linux.go:53] "Using iptables proxy"
	I1018 18:25:46.398235       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 18:25:46.498996       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 18:25:46.499082       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1018 18:25:46.499209       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 18:25:46.524419       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 18:25:46.524529       1 server_linux.go:132] "Using iptables Proxier"
	I1018 18:25:46.595115       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 18:25:46.595474       1 server.go:527] "Version info" version="v1.34.1"
	I1018 18:25:46.595490       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 18:25:46.597327       1 config.go:200] "Starting service config controller"
	I1018 18:25:46.597409       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 18:25:46.597453       1 config.go:106] "Starting endpoint slice config controller"
	I1018 18:25:46.597497       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 18:25:46.597535       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 18:25:46.597568       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 18:25:46.598196       1 config.go:309] "Starting node config controller"
	I1018 18:25:46.600902       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 18:25:46.601108       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 18:25:46.698084       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 18:25:46.698125       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 18:25:46.698167       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [db7e052bfb458a74f9e888e8ffa588ed16db1f7552c47499d1b30765c74fcce9] <==
	I1018 18:25:41.180464       1 serving.go:386] Generated self-signed cert in-memory
	I1018 18:25:45.550044       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 18:25:45.550073       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 18:25:45.571755       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 18:25:45.571841       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1018 18:25:45.571861       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1018 18:25:45.571908       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 18:25:45.591004       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 18:25:45.591025       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 18:25:45.591045       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 18:25:45.591051       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 18:25:45.674105       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1018 18:25:45.692870       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 18:25:45.692956       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 18:25:43 newest-cni-530891 kubelet[729]: I1018 18:25:43.946648     729 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-530891"
	Oct 18 18:25:44 newest-cni-530891 kubelet[729]: I1018 18:25:44.003072     729 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-530891"
	Oct 18 18:25:44 newest-cni-530891 kubelet[729]: I1018 18:25:44.003235     729 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-530891"
	Oct 18 18:25:44 newest-cni-530891 kubelet[729]: I1018 18:25:44.003313     729 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 18 18:25:44 newest-cni-530891 kubelet[729]: I1018 18:25:44.007100     729 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 18 18:25:44 newest-cni-530891 kubelet[729]: E1018 18:25:44.020734     729 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-530891\" already exists" pod="kube-system/etcd-newest-cni-530891"
	Oct 18 18:25:44 newest-cni-530891 kubelet[729]: I1018 18:25:44.020784     729 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-530891"
	Oct 18 18:25:44 newest-cni-530891 kubelet[729]: E1018 18:25:44.101744     729 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-530891\" already exists" pod="kube-system/kube-apiserver-newest-cni-530891"
	Oct 18 18:25:44 newest-cni-530891 kubelet[729]: I1018 18:25:44.101776     729 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-530891"
	Oct 18 18:25:44 newest-cni-530891 kubelet[729]: E1018 18:25:44.123136     729 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-530891\" already exists" pod="kube-system/kube-controller-manager-newest-cni-530891"
	Oct 18 18:25:44 newest-cni-530891 kubelet[729]: I1018 18:25:44.123170     729 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-530891"
	Oct 18 18:25:44 newest-cni-530891 kubelet[729]: E1018 18:25:44.166038     729 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-530891\" already exists" pod="kube-system/kube-scheduler-newest-cni-530891"
	Oct 18 18:25:44 newest-cni-530891 kubelet[729]: I1018 18:25:44.821727     729 apiserver.go:52] "Watching apiserver"
	Oct 18 18:25:44 newest-cni-530891 kubelet[729]: I1018 18:25:44.849856     729 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 18 18:25:44 newest-cni-530891 kubelet[729]: I1018 18:25:44.862116     729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/99e6305c-fb9e-4f10-9746-3dfdd03c570a-cni-cfg\") pod \"kindnet-497z4\" (UID: \"99e6305c-fb9e-4f10-9746-3dfdd03c570a\") " pod="kube-system/kindnet-497z4"
	Oct 18 18:25:44 newest-cni-530891 kubelet[729]: I1018 18:25:44.862328     729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/99e6305c-fb9e-4f10-9746-3dfdd03c570a-xtables-lock\") pod \"kindnet-497z4\" (UID: \"99e6305c-fb9e-4f10-9746-3dfdd03c570a\") " pod="kube-system/kindnet-497z4"
	Oct 18 18:25:44 newest-cni-530891 kubelet[729]: I1018 18:25:44.862470     729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/99e6305c-fb9e-4f10-9746-3dfdd03c570a-lib-modules\") pod \"kindnet-497z4\" (UID: \"99e6305c-fb9e-4f10-9746-3dfdd03c570a\") " pod="kube-system/kindnet-497z4"
	Oct 18 18:25:44 newest-cni-530891 kubelet[729]: I1018 18:25:44.862551     729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2f4233c2-bc5d-452a-84e3-875564801a54-xtables-lock\") pod \"kube-proxy-k8ljb\" (UID: \"2f4233c2-bc5d-452a-84e3-875564801a54\") " pod="kube-system/kube-proxy-k8ljb"
	Oct 18 18:25:44 newest-cni-530891 kubelet[729]: I1018 18:25:44.862711     729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2f4233c2-bc5d-452a-84e3-875564801a54-lib-modules\") pod \"kube-proxy-k8ljb\" (UID: \"2f4233c2-bc5d-452a-84e3-875564801a54\") " pod="kube-system/kube-proxy-k8ljb"
	Oct 18 18:25:44 newest-cni-530891 kubelet[729]: I1018 18:25:44.925275     729 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 18 18:25:45 newest-cni-530891 kubelet[729]: W1018 18:25:45.270161     729 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/592c46465c1aa48efe97f2b3db6c46c918fe8e6fb44a63deec22e7bb1784c31e/crio-e650f9b6f354349ddf1c0de73f3dffa9634229520070d7ad90209cc8a1e4e121 WatchSource:0}: Error finding container e650f9b6f354349ddf1c0de73f3dffa9634229520070d7ad90209cc8a1e4e121: Status 404 returned error can't find the container with id e650f9b6f354349ddf1c0de73f3dffa9634229520070d7ad90209cc8a1e4e121
	Oct 18 18:25:45 newest-cni-530891 kubelet[729]: W1018 18:25:45.370894     729 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/592c46465c1aa48efe97f2b3db6c46c918fe8e6fb44a63deec22e7bb1784c31e/crio-7702d7866c44b245fb5125de0b275f00d916abb79da175e3c60b294e481e0588 WatchSource:0}: Error finding container 7702d7866c44b245fb5125de0b275f00d916abb79da175e3c60b294e481e0588: Status 404 returned error can't find the container with id 7702d7866c44b245fb5125de0b275f00d916abb79da175e3c60b294e481e0588
	Oct 18 18:25:47 newest-cni-530891 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 18:25:47 newest-cni-530891 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 18:25:47 newest-cni-530891 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-530891 -n newest-cni-530891
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-530891 -n newest-cni-530891: exit status 2 (622.542684ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-530891 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-brzb4 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-w9bmc kubernetes-dashboard-855c9754f9-w4sx6
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-530891 describe pod coredns-66bc5c9577-brzb4 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-w9bmc kubernetes-dashboard-855c9754f9-w4sx6
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-530891 describe pod coredns-66bc5c9577-brzb4 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-w9bmc kubernetes-dashboard-855c9754f9-w4sx6: exit status 1 (144.267982ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-brzb4" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-w9bmc" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-w4sx6" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-530891 describe pod coredns-66bc5c9577-brzb4 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-w9bmc kubernetes-dashboard-855c9754f9-w4sx6: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (7.61s)
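The non-running-pod check in the post-mortem above can be repeated by hand against any remaining profile with the same field selector the helper uses (a sketch only; the newest-cni-530891 context and pod names are specific to this run, and the NotFound errors show those pods were already gone by the time describe ran):

	kubectl --context newest-cni-530891 get pods -A --field-selector=status.phase!=Running -o=jsonpath='{.items[*].metadata.name}'
	kubectl --context newest-cni-530891 describe pod <pod-name>   # NotFound here means the pod was deleted between the two calls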

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (6.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-729957 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p no-preload-729957 --alsologtostderr -v=1: exit status 80 (1.791411257s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-729957 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 18:26:48.207955  227465 out.go:360] Setting OutFile to fd 1 ...
	I1018 18:26:48.208100  227465 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 18:26:48.208124  227465 out.go:374] Setting ErrFile to fd 2...
	I1018 18:26:48.208146  227465 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 18:26:48.208424  227465 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-2509/.minikube/bin
	I1018 18:26:48.210185  227465 out.go:368] Setting JSON to false
	I1018 18:26:48.210241  227465 mustload.go:65] Loading cluster: no-preload-729957
	I1018 18:26:48.210640  227465 config.go:182] Loaded profile config "no-preload-729957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 18:26:48.211171  227465 cli_runner.go:164] Run: docker container inspect no-preload-729957 --format={{.State.Status}}
	I1018 18:26:48.232095  227465 host.go:66] Checking if "no-preload-729957" exists ...
	I1018 18:26:48.232410  227465 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 18:26:48.284187  227465 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-18 18:26:48.274189429 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 18:26:48.284887  227465 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-729957 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1018 18:26:48.288453  227465 out.go:179] * Pausing node no-preload-729957 ... 
	I1018 18:26:48.291404  227465 host.go:66] Checking if "no-preload-729957" exists ...
	I1018 18:26:48.291745  227465 ssh_runner.go:195] Run: systemctl --version
	I1018 18:26:48.291807  227465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-729957
	I1018 18:26:48.310981  227465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/no-preload-729957/id_rsa Username:docker}
	I1018 18:26:48.419821  227465 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 18:26:48.435276  227465 pause.go:52] kubelet running: true
	I1018 18:26:48.435397  227465 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 18:26:48.720325  227465 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 18:26:48.720408  227465 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 18:26:48.789415  227465 cri.go:89] found id: "f8531bead1ef2e3eada16cba59c589956dc484ab1cff2f47a411a5f3dbd97427"
	I1018 18:26:48.789438  227465 cri.go:89] found id: "0fa470fa642abc5faf16ee6eb2a3332179be9e9bd3853405ee4a917524746026"
	I1018 18:26:48.789445  227465 cri.go:89] found id: "5f5f900bb1fe229f3538acdd9a0c3aad246dff6301bfc684afdfd990ab97fe94"
	I1018 18:26:48.789448  227465 cri.go:89] found id: "8cbe09ef34bf4eef049fd6b1f047b6b1c569dbdbffc529ac4e6883ef231e2b93"
	I1018 18:26:48.789452  227465 cri.go:89] found id: "2a3d029247f0f2961084715b69f7ac4e03f5bd09abdd133077b57c82216aefd4"
	I1018 18:26:48.789456  227465 cri.go:89] found id: "a51a8b9c45aa1dd947ab88c80db7d69a3ada1bf2e6ca00bc66384aaccb0ff136"
	I1018 18:26:48.789459  227465 cri.go:89] found id: "6974399a43a070944c3ef86eb0363ba4bca8f5c775d0d5143be212a028542142"
	I1018 18:26:48.789463  227465 cri.go:89] found id: "d2a6df964e5a27a75360411f0fbe62d805660605d883656062b3e9b3c98ffc61"
	I1018 18:26:48.789466  227465 cri.go:89] found id: "b42f50a512a46ed3a6cad329c67f5e35b5354a294a55db5944cfbd20dd29cbd2"
	I1018 18:26:48.789473  227465 cri.go:89] found id: "471127644b325b85c5c10f6876205a690ae590a617ae0a3345a5d15788948065"
	I1018 18:26:48.789476  227465 cri.go:89] found id: "7624f6abd459809bd3046f1c044b4a4b33cb3de17198c331adda43e222af9966"
	I1018 18:26:48.789479  227465 cri.go:89] found id: ""
	I1018 18:26:48.789525  227465 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 18:26:48.809470  227465 retry.go:31] will retry after 213.038502ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T18:26:48Z" level=error msg="open /run/runc: no such file or directory"
	I1018 18:26:49.022756  227465 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 18:26:49.037685  227465 pause.go:52] kubelet running: false
	I1018 18:26:49.037774  227465 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 18:26:49.221802  227465 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 18:26:49.221879  227465 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 18:26:49.291073  227465 cri.go:89] found id: "f8531bead1ef2e3eada16cba59c589956dc484ab1cff2f47a411a5f3dbd97427"
	I1018 18:26:49.291098  227465 cri.go:89] found id: "0fa470fa642abc5faf16ee6eb2a3332179be9e9bd3853405ee4a917524746026"
	I1018 18:26:49.291103  227465 cri.go:89] found id: "5f5f900bb1fe229f3538acdd9a0c3aad246dff6301bfc684afdfd990ab97fe94"
	I1018 18:26:49.291107  227465 cri.go:89] found id: "8cbe09ef34bf4eef049fd6b1f047b6b1c569dbdbffc529ac4e6883ef231e2b93"
	I1018 18:26:49.291110  227465 cri.go:89] found id: "2a3d029247f0f2961084715b69f7ac4e03f5bd09abdd133077b57c82216aefd4"
	I1018 18:26:49.291114  227465 cri.go:89] found id: "a51a8b9c45aa1dd947ab88c80db7d69a3ada1bf2e6ca00bc66384aaccb0ff136"
	I1018 18:26:49.291117  227465 cri.go:89] found id: "6974399a43a070944c3ef86eb0363ba4bca8f5c775d0d5143be212a028542142"
	I1018 18:26:49.291120  227465 cri.go:89] found id: "d2a6df964e5a27a75360411f0fbe62d805660605d883656062b3e9b3c98ffc61"
	I1018 18:26:49.291123  227465 cri.go:89] found id: "b42f50a512a46ed3a6cad329c67f5e35b5354a294a55db5944cfbd20dd29cbd2"
	I1018 18:26:49.291162  227465 cri.go:89] found id: "471127644b325b85c5c10f6876205a690ae590a617ae0a3345a5d15788948065"
	I1018 18:26:49.291173  227465 cri.go:89] found id: "7624f6abd459809bd3046f1c044b4a4b33cb3de17198c331adda43e222af9966"
	I1018 18:26:49.291176  227465 cri.go:89] found id: ""
	I1018 18:26:49.291240  227465 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 18:26:49.302474  227465 retry.go:31] will retry after 351.858925ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T18:26:49Z" level=error msg="open /run/runc: no such file or directory"
	I1018 18:26:49.654924  227465 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 18:26:49.669462  227465 pause.go:52] kubelet running: false
	I1018 18:26:49.669609  227465 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 18:26:49.848114  227465 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 18:26:49.848212  227465 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 18:26:49.921711  227465 cri.go:89] found id: "f8531bead1ef2e3eada16cba59c589956dc484ab1cff2f47a411a5f3dbd97427"
	I1018 18:26:49.921787  227465 cri.go:89] found id: "0fa470fa642abc5faf16ee6eb2a3332179be9e9bd3853405ee4a917524746026"
	I1018 18:26:49.921807  227465 cri.go:89] found id: "5f5f900bb1fe229f3538acdd9a0c3aad246dff6301bfc684afdfd990ab97fe94"
	I1018 18:26:49.921823  227465 cri.go:89] found id: "8cbe09ef34bf4eef049fd6b1f047b6b1c569dbdbffc529ac4e6883ef231e2b93"
	I1018 18:26:49.921837  227465 cri.go:89] found id: "2a3d029247f0f2961084715b69f7ac4e03f5bd09abdd133077b57c82216aefd4"
	I1018 18:26:49.921854  227465 cri.go:89] found id: "a51a8b9c45aa1dd947ab88c80db7d69a3ada1bf2e6ca00bc66384aaccb0ff136"
	I1018 18:26:49.921883  227465 cri.go:89] found id: "6974399a43a070944c3ef86eb0363ba4bca8f5c775d0d5143be212a028542142"
	I1018 18:26:49.921918  227465 cri.go:89] found id: "d2a6df964e5a27a75360411f0fbe62d805660605d883656062b3e9b3c98ffc61"
	I1018 18:26:49.921935  227465 cri.go:89] found id: "b42f50a512a46ed3a6cad329c67f5e35b5354a294a55db5944cfbd20dd29cbd2"
	I1018 18:26:49.921954  227465 cri.go:89] found id: "471127644b325b85c5c10f6876205a690ae590a617ae0a3345a5d15788948065"
	I1018 18:26:49.921979  227465 cri.go:89] found id: "7624f6abd459809bd3046f1c044b4a4b33cb3de17198c331adda43e222af9966"
	I1018 18:26:49.921997  227465 cri.go:89] found id: ""
	I1018 18:26:49.922060  227465 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 18:26:49.937569  227465 out.go:203] 
	W1018 18:26:49.940514  227465 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T18:26:49Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T18:26:49Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 18:26:49.940535  227465 out.go:285] * 
	* 
	W1018 18:26:49.946109  227465 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 18:26:49.949019  227465 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p no-preload-729957 --alsologtostderr -v=1 failed: exit status 80
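The exit status 80 above is raised in minikube's pause path: it disables the kubelet, lists kube-system/kubernetes-dashboard/istio-operator containers through crictl, then runs sudo runc list -f json, which keeps failing because /run/runc does not exist on this crio node. A rough way to re-run those same checks by hand (a sketch, assuming the no-preload-729957 profile is still running; minikube ssh is only used here for illustration):

	out/minikube-linux-arm64 ssh -p no-preload-729957 -- sudo systemctl is-active kubelet
	out/minikube-linux-arm64 ssh -p no-preload-729957 -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	out/minikube-linux-arm64 ssh -p no-preload-729957 -- sudo runc list -f json
	# the last command reproduces: open /run/runc: no such file or directory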
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-729957
helpers_test.go:243: (dbg) docker inspect no-preload-729957:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "26cea4068f8df271decd5fca2af44d16fcce605ab26c19246830b355e9629673",
	        "Created": "2025-10-18T18:24:12.31875014Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 221364,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T18:25:44.685537731Z",
	            "FinishedAt": "2025-10-18T18:25:43.457083285Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/26cea4068f8df271decd5fca2af44d16fcce605ab26c19246830b355e9629673/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/26cea4068f8df271decd5fca2af44d16fcce605ab26c19246830b355e9629673/hostname",
	        "HostsPath": "/var/lib/docker/containers/26cea4068f8df271decd5fca2af44d16fcce605ab26c19246830b355e9629673/hosts",
	        "LogPath": "/var/lib/docker/containers/26cea4068f8df271decd5fca2af44d16fcce605ab26c19246830b355e9629673/26cea4068f8df271decd5fca2af44d16fcce605ab26c19246830b355e9629673-json.log",
	        "Name": "/no-preload-729957",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-729957:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-729957",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "26cea4068f8df271decd5fca2af44d16fcce605ab26c19246830b355e9629673",
	                "LowerDir": "/var/lib/docker/overlay2/23e3b3ca1f79e937b59a52dcaa595b90f6276c9c388c3cfb57d1e199b659f3cd-init/diff:/var/lib/docker/overlay2/584ab177b02ad2db5330471b7171ad39934c457d8615b9ee4939a04b59f78474/diff",
	                "MergedDir": "/var/lib/docker/overlay2/23e3b3ca1f79e937b59a52dcaa595b90f6276c9c388c3cfb57d1e199b659f3cd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/23e3b3ca1f79e937b59a52dcaa595b90f6276c9c388c3cfb57d1e199b659f3cd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/23e3b3ca1f79e937b59a52dcaa595b90f6276c9c388c3cfb57d1e199b659f3cd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-729957",
	                "Source": "/var/lib/docker/volumes/no-preload-729957/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-729957",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-729957",
	                "name.minikube.sigs.k8s.io": "no-preload-729957",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "553f21ec8a7c91699b09c42ba4ac9cb745f0346087552b1219544a0a9cff0d07",
	            "SandboxKey": "/var/run/docker/netns/553f21ec8a7c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-729957": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d2:66:ed:76:2c:92",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9171cfee9247515a7d76872523f6d046330152cbb9ee1a62de7b40aaab7a7a81",
	                    "EndpointID": "a7be7c6c91b648fc85680750cdbea044c402b0173d749aa2b2f1d7ab67845f2b",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-729957",
	                        "26cea4068f8d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
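The SSH endpoint the pause command dialed (127.0.0.1:33088 in the sshutil line above) comes straight from this inspect output; the same Go template that appears in the log can be used to read it back (values are specific to this run):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-729957
	# prints 33088 for this container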
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-729957 -n no-preload-729957
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-729957 -n no-preload-729957: exit status 2 (374.005992ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-729957 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-729957 logs -n 25: (1.394089388s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ pause   │ -p default-k8s-diff-port-192562 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-192562 │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-192562                                                                                                                                                                                                               │ default-k8s-diff-port-192562 │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │ 18 Oct 25 18:24 UTC │
	│ delete  │ -p default-k8s-diff-port-192562                                                                                                                                                                                                               │ default-k8s-diff-port-192562 │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │ 18 Oct 25 18:24 UTC │
	│ delete  │ -p disable-driver-mounts-747178                                                                                                                                                                                                               │ disable-driver-mounts-747178 │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │ 18 Oct 25 18:24 UTC │
	│ start   │ -p no-preload-729957 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-729957            │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │ 18 Oct 25 18:25 UTC │
	│ image   │ embed-certs-213943 image list --format=json                                                                                                                                                                                                   │ embed-certs-213943           │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │ 18 Oct 25 18:24 UTC │
	│ pause   │ -p embed-certs-213943 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-213943           │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │                     │
	│ delete  │ -p embed-certs-213943                                                                                                                                                                                                                         │ embed-certs-213943           │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │ 18 Oct 25 18:24 UTC │
	│ delete  │ -p embed-certs-213943                                                                                                                                                                                                                         │ embed-certs-213943           │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │ 18 Oct 25 18:24 UTC │
	│ start   │ -p newest-cni-530891 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-530891            │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │ 18 Oct 25 18:25 UTC │
	│ addons  │ enable metrics-server -p newest-cni-530891 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-530891            │ jenkins │ v1.37.0 │ 18 Oct 25 18:25 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-729957 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-729957            │ jenkins │ v1.37.0 │ 18 Oct 25 18:25 UTC │                     │
	│ stop    │ -p newest-cni-530891 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-530891            │ jenkins │ v1.37.0 │ 18 Oct 25 18:25 UTC │ 18 Oct 25 18:25 UTC │
	│ addons  │ enable dashboard -p newest-cni-530891 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-530891            │ jenkins │ v1.37.0 │ 18 Oct 25 18:25 UTC │ 18 Oct 25 18:25 UTC │
	│ start   │ -p newest-cni-530891 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-530891            │ jenkins │ v1.37.0 │ 18 Oct 25 18:25 UTC │ 18 Oct 25 18:25 UTC │
	│ stop    │ -p no-preload-729957 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-729957            │ jenkins │ v1.37.0 │ 18 Oct 25 18:25 UTC │ 18 Oct 25 18:25 UTC │
	│ addons  │ enable dashboard -p no-preload-729957 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-729957            │ jenkins │ v1.37.0 │ 18 Oct 25 18:25 UTC │ 18 Oct 25 18:25 UTC │
	│ start   │ -p no-preload-729957 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-729957            │ jenkins │ v1.37.0 │ 18 Oct 25 18:25 UTC │ 18 Oct 25 18:26 UTC │
	│ image   │ newest-cni-530891 image list --format=json                                                                                                                                                                                                    │ newest-cni-530891            │ jenkins │ v1.37.0 │ 18 Oct 25 18:25 UTC │ 18 Oct 25 18:25 UTC │
	│ pause   │ -p newest-cni-530891 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-530891            │ jenkins │ v1.37.0 │ 18 Oct 25 18:25 UTC │                     │
	│ delete  │ -p newest-cni-530891                                                                                                                                                                                                                          │ newest-cni-530891            │ jenkins │ v1.37.0 │ 18 Oct 25 18:25 UTC │ 18 Oct 25 18:25 UTC │
	│ delete  │ -p newest-cni-530891                                                                                                                                                                                                                          │ newest-cni-530891            │ jenkins │ v1.37.0 │ 18 Oct 25 18:25 UTC │ 18 Oct 25 18:25 UTC │
	│ start   │ -p auto-111074 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-111074                  │ jenkins │ v1.37.0 │ 18 Oct 25 18:25 UTC │                     │
	│ image   │ no-preload-729957 image list --format=json                                                                                                                                                                                                    │ no-preload-729957            │ jenkins │ v1.37.0 │ 18 Oct 25 18:26 UTC │ 18 Oct 25 18:26 UTC │
	│ pause   │ -p no-preload-729957 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-729957            │ jenkins │ v1.37.0 │ 18 Oct 25 18:26 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 18:25:57
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 18:25:57.609163  224323 out.go:360] Setting OutFile to fd 1 ...
	I1018 18:25:57.609392  224323 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 18:25:57.609416  224323 out.go:374] Setting ErrFile to fd 2...
	I1018 18:25:57.609436  224323 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 18:25:57.609711  224323 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-2509/.minikube/bin
	I1018 18:25:57.610133  224323 out.go:368] Setting JSON to false
	I1018 18:25:57.611071  224323 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7707,"bootTime":1760804251,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1018 18:25:57.611158  224323 start.go:141] virtualization:  
	I1018 18:25:57.615064  224323 out.go:179] * [auto-111074] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 18:25:57.619275  224323 out.go:179]   - MINIKUBE_LOCATION=21409
	I1018 18:25:57.619342  224323 notify.go:220] Checking for updates...
	I1018 18:25:57.626141  224323 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 18:25:57.629155  224323 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-2509/kubeconfig
	I1018 18:25:57.632037  224323 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-2509/.minikube
	I1018 18:25:57.635011  224323 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 18:25:57.638034  224323 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 18:25:57.641571  224323 config.go:182] Loaded profile config "no-preload-729957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 18:25:57.641672  224323 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 18:25:57.704477  224323 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 18:25:57.704601  224323 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 18:25:57.836454  224323 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-18 18:25:57.82641665 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 18:25:57.836552  224323 docker.go:318] overlay module found
	I1018 18:25:57.839750  224323 out.go:179] * Using the docker driver based on user configuration
	I1018 18:25:57.842672  224323 start.go:305] selected driver: docker
	I1018 18:25:57.842691  224323 start.go:925] validating driver "docker" against <nil>
	I1018 18:25:57.842705  224323 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 18:25:57.843384  224323 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 18:25:57.947183  224323 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-18 18:25:57.935286064 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 18:25:57.947336  224323 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 18:25:57.947540  224323 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 18:25:57.950493  224323 out.go:179] * Using Docker driver with root privileges
	I1018 18:25:57.953450  224323 cni.go:84] Creating CNI manager for ""
	I1018 18:25:57.953520  224323 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 18:25:57.953532  224323 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 18:25:57.953606  224323 start.go:349] cluster config:
	{Name:auto-111074 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-111074 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 18:25:57.956774  224323 out.go:179] * Starting "auto-111074" primary control-plane node in "auto-111074" cluster
	I1018 18:25:57.959532  224323 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 18:25:57.962389  224323 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 18:25:57.965062  224323 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 18:25:57.965107  224323 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1018 18:25:57.965117  224323 cache.go:58] Caching tarball of preloaded images
	I1018 18:25:57.965204  224323 preload.go:233] Found /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 18:25:57.965221  224323 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 18:25:57.965332  224323 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/auto-111074/config.json ...
	I1018 18:25:57.965352  224323 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/auto-111074/config.json: {Name:mkbb82346508b84aaf227169a59c31534a3f406d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:25:57.965497  224323 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 18:25:57.998536  224323 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 18:25:57.998555  224323 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 18:25:57.998567  224323 cache.go:232] Successfully downloaded all kic artifacts
	I1018 18:25:57.998588  224323 start.go:360] acquireMachinesLock for auto-111074: {Name:mk75369a1a9bfcfe98d7f880f24bb4d102e5b8ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 18:25:57.998680  224323 start.go:364] duration metric: took 77.088µs to acquireMachinesLock for "auto-111074"
	I1018 18:25:57.998707  224323 start.go:93] Provisioning new machine with config: &{Name:auto-111074 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-111074 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 18:25:57.998772  224323 start.go:125] createHost starting for "" (driver="docker")
	I1018 18:25:54.307067  221240 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1018 18:25:54.307087  221240 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1018 18:25:54.393328  221240 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1018 18:25:54.393349  221240 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1018 18:25:54.466168  221240 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1018 18:25:54.466185  221240 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1018 18:25:54.519984  221240 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1018 18:25:54.520006  221240 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1018 18:25:54.549443  221240 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1018 18:25:54.549465  221240 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1018 18:25:54.570273  221240 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1018 18:25:54.570294  221240 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1018 18:25:54.617388  221240 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1018 18:25:54.617409  221240 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1018 18:25:54.651310  221240 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 18:25:54.651330  221240 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1018 18:25:54.677285  221240 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 18:25:58.002196  224323 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1018 18:25:58.002471  224323 start.go:159] libmachine.API.Create for "auto-111074" (driver="docker")
	I1018 18:25:58.002517  224323 client.go:168] LocalClient.Create starting
	I1018 18:25:58.002595  224323 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem
	I1018 18:25:58.002628  224323 main.go:141] libmachine: Decoding PEM data...
	I1018 18:25:58.002642  224323 main.go:141] libmachine: Parsing certificate...
	I1018 18:25:58.002706  224323 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem
	I1018 18:25:58.002725  224323 main.go:141] libmachine: Decoding PEM data...
	I1018 18:25:58.002736  224323 main.go:141] libmachine: Parsing certificate...
	I1018 18:25:58.003154  224323 cli_runner.go:164] Run: docker network inspect auto-111074 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 18:25:58.030563  224323 cli_runner.go:211] docker network inspect auto-111074 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 18:25:58.030658  224323 network_create.go:284] running [docker network inspect auto-111074] to gather additional debugging logs...
	I1018 18:25:58.030675  224323 cli_runner.go:164] Run: docker network inspect auto-111074
	W1018 18:25:58.046975  224323 cli_runner.go:211] docker network inspect auto-111074 returned with exit code 1
	I1018 18:25:58.047002  224323 network_create.go:287] error running [docker network inspect auto-111074]: docker network inspect auto-111074: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-111074 not found
	I1018 18:25:58.047023  224323 network_create.go:289] output of [docker network inspect auto-111074]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-111074 not found
	
	** /stderr **
	I1018 18:25:58.047116  224323 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 18:25:58.063221  224323 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-903568cdf824 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:7a:80:c0:8c:ed} reservation:<nil>}
	I1018 18:25:58.063532  224323 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-ee9fcaab9ca8 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a6:a7:65:1b:c0:41} reservation:<nil>}
	I1018 18:25:58.063833  224323 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-414fc11e154b IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:86:f0:a8:1a:86:00} reservation:<nil>}
	I1018 18:25:58.064076  224323 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-9171cfee9247 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:e6:21:8a:96:2d:4e} reservation:<nil>}
	I1018 18:25:58.064486  224323 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a13100}
	I1018 18:25:58.064505  224323 network_create.go:124] attempt to create docker network auto-111074 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1018 18:25:58.064569  224323 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-111074 auto-111074
	I1018 18:25:58.151076  224323 network_create.go:108] docker network auto-111074 192.168.85.0/24 created
	I1018 18:25:58.151102  224323 kic.go:121] calculated static IP "192.168.85.2" for the "auto-111074" container
	I1018 18:25:58.151175  224323 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 18:25:58.174584  224323 cli_runner.go:164] Run: docker volume create auto-111074 --label name.minikube.sigs.k8s.io=auto-111074 --label created_by.minikube.sigs.k8s.io=true
	I1018 18:25:58.215861  224323 oci.go:103] Successfully created a docker volume auto-111074
	I1018 18:25:58.215936  224323 cli_runner.go:164] Run: docker run --rm --name auto-111074-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-111074 --entrypoint /usr/bin/test -v auto-111074:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 18:25:58.973948  224323 oci.go:107] Successfully prepared a docker volume auto-111074
	I1018 18:25:58.973995  224323 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 18:25:58.974030  224323 kic.go:194] Starting extracting preloaded images to volume ...
	I1018 18:25:58.974100  224323 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-111074:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1018 18:26:02.721104  221240 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.573396552s)
	I1018 18:26:02.721160  221240 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (8.567505809s)
	I1018 18:26:02.721188  221240 node_ready.go:35] waiting up to 6m0s for node "no-preload-729957" to be "Ready" ...
	I1018 18:26:02.721483  221240 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.528374311s)
	I1018 18:26:02.775049  221240 node_ready.go:49] node "no-preload-729957" is "Ready"
	I1018 18:26:02.775082  221240 node_ready.go:38] duration metric: took 53.876862ms for node "no-preload-729957" to be "Ready" ...
	I1018 18:26:02.775098  221240 api_server.go:52] waiting for apiserver process to appear ...
	I1018 18:26:02.775176  221240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 18:26:03.197456  221240 api_server.go:72] duration metric: took 9.551715057s to wait for apiserver process to appear ...
	I1018 18:26:03.197479  221240 api_server.go:88] waiting for apiserver healthz status ...
	I1018 18:26:03.197498  221240 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 18:26:03.197816  221240 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (8.520495745s)
	I1018 18:26:03.206414  221240 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 18:26:03.206444  221240 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 18:26:03.207087  221240 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-729957 addons enable metrics-server
	
	I1018 18:26:03.217445  221240 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1018 18:26:03.223723  221240 addons.go:514] duration metric: took 9.577568493s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1018 18:26:03.698973  221240 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 18:26:03.728363  221240 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1018 18:26:03.729870  221240 api_server.go:141] control plane version: v1.34.1
	I1018 18:26:03.729891  221240 api_server.go:131] duration metric: took 532.404803ms to wait for apiserver health ...
	I1018 18:26:03.729901  221240 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 18:26:03.753759  221240 system_pods.go:59] 8 kube-system pods found
	I1018 18:26:03.753794  221240 system_pods.go:61] "coredns-66bc5c9577-q7mng" [365b51ac-c2aa-4247-a37e-ef5ce5d54a36] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 18:26:03.753806  221240 system_pods.go:61] "etcd-no-preload-729957" [29023f58-84ea-44ad-b6e8-cc5cf720a4be] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 18:26:03.753815  221240 system_pods.go:61] "kindnet-4hbt7" [6c9fa05f-7c37-442d-b3fa-ee037c865d3e] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1018 18:26:03.753823  221240 system_pods.go:61] "kube-apiserver-no-preload-729957" [ea721a8e-b407-4422-b1c1-dc40032787ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 18:26:03.753832  221240 system_pods.go:61] "kube-controller-manager-no-preload-729957" [bf889e9e-777e-403a-b4ef-3582a86bafbb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 18:26:03.753840  221240 system_pods.go:61] "kube-proxy-75znn" [c6f7e4f1-ccc0-40c5-b449-fb42e743f373] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1018 18:26:03.753847  221240 system_pods.go:61] "kube-scheduler-no-preload-729957" [fa436526-c2f9-43b9-a48e-57dc63916082] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 18:26:03.753851  221240 system_pods.go:61] "storage-provisioner" [4bef6a17-c67c-4394-837e-c20c6378a6ed] Running
	I1018 18:26:03.753857  221240 system_pods.go:74] duration metric: took 23.943758ms to wait for pod list to return data ...
	I1018 18:26:03.753865  221240 default_sa.go:34] waiting for default service account to be created ...
	I1018 18:26:03.761524  221240 default_sa.go:45] found service account: "default"
	I1018 18:26:03.761547  221240 default_sa.go:55] duration metric: took 7.676349ms for default service account to be created ...
	I1018 18:26:03.761558  221240 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 18:26:03.774696  221240 system_pods.go:86] 8 kube-system pods found
	I1018 18:26:03.774731  221240 system_pods.go:89] "coredns-66bc5c9577-q7mng" [365b51ac-c2aa-4247-a37e-ef5ce5d54a36] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 18:26:03.774750  221240 system_pods.go:89] "etcd-no-preload-729957" [29023f58-84ea-44ad-b6e8-cc5cf720a4be] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 18:26:03.774757  221240 system_pods.go:89] "kindnet-4hbt7" [6c9fa05f-7c37-442d-b3fa-ee037c865d3e] Running
	I1018 18:26:03.774765  221240 system_pods.go:89] "kube-apiserver-no-preload-729957" [ea721a8e-b407-4422-b1c1-dc40032787ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 18:26:03.774772  221240 system_pods.go:89] "kube-controller-manager-no-preload-729957" [bf889e9e-777e-403a-b4ef-3582a86bafbb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 18:26:03.774778  221240 system_pods.go:89] "kube-proxy-75znn" [c6f7e4f1-ccc0-40c5-b449-fb42e743f373] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1018 18:26:03.774785  221240 system_pods.go:89] "kube-scheduler-no-preload-729957" [fa436526-c2f9-43b9-a48e-57dc63916082] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 18:26:03.774789  221240 system_pods.go:89] "storage-provisioner" [4bef6a17-c67c-4394-837e-c20c6378a6ed] Running
	I1018 18:26:03.774796  221240 system_pods.go:126] duration metric: took 13.233409ms to wait for k8s-apps to be running ...
	I1018 18:26:03.774805  221240 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 18:26:03.774954  221240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 18:26:03.794488  221240 system_svc.go:56] duration metric: took 19.673369ms WaitForService to wait for kubelet
	I1018 18:26:03.794515  221240 kubeadm.go:586] duration metric: took 10.148777857s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 18:26:03.794533  221240 node_conditions.go:102] verifying NodePressure condition ...
	I1018 18:26:03.802993  221240 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 18:26:03.803020  221240 node_conditions.go:123] node cpu capacity is 2
	I1018 18:26:03.803032  221240 node_conditions.go:105] duration metric: took 8.493812ms to run NodePressure ...
	I1018 18:26:03.803045  221240 start.go:241] waiting for startup goroutines ...
	I1018 18:26:03.803053  221240 start.go:246] waiting for cluster config update ...
	I1018 18:26:03.803064  221240 start.go:255] writing updated cluster config ...
	I1018 18:26:03.803408  221240 ssh_runner.go:195] Run: rm -f paused
	I1018 18:26:03.810508  221240 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 18:26:03.815687  221240 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-q7mng" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:26:03.675174  224323 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-111074:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.701023076s)
	I1018 18:26:03.675275  224323 kic.go:203] duration metric: took 4.70125498s to extract preloaded images to volume ...
	W1018 18:26:03.675417  224323 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1018 18:26:03.675530  224323 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 18:26:03.788240  224323 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-111074 --name auto-111074 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-111074 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-111074 --network auto-111074 --ip 192.168.85.2 --volume auto-111074:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 18:26:04.116770  224323 cli_runner.go:164] Run: docker container inspect auto-111074 --format={{.State.Running}}
	I1018 18:26:04.137918  224323 cli_runner.go:164] Run: docker container inspect auto-111074 --format={{.State.Status}}
	I1018 18:26:04.162125  224323 cli_runner.go:164] Run: docker exec auto-111074 stat /var/lib/dpkg/alternatives/iptables
	I1018 18:26:04.219459  224323 oci.go:144] the created container "auto-111074" has a running status.
	I1018 18:26:04.219505  224323 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/auto-111074/id_rsa...
	I1018 18:26:05.302290  224323 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21409-2509/.minikube/machines/auto-111074/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 18:26:05.327910  224323 cli_runner.go:164] Run: docker container inspect auto-111074 --format={{.State.Status}}
	I1018 18:26:05.346531  224323 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 18:26:05.346554  224323 kic_runner.go:114] Args: [docker exec --privileged auto-111074 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 18:26:05.389711  224323 cli_runner.go:164] Run: docker container inspect auto-111074 --format={{.State.Status}}
	I1018 18:26:05.406668  224323 machine.go:93] provisionDockerMachine start ...
	I1018 18:26:05.406794  224323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-111074
	I1018 18:26:05.425480  224323 main.go:141] libmachine: Using SSH client type: native
	I1018 18:26:05.425836  224323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1018 18:26:05.425857  224323 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 18:26:05.426484  224323 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56442->127.0.0.1:33093: read: connection reset by peer
	W1018 18:26:05.821867  221240 pod_ready.go:104] pod "coredns-66bc5c9577-q7mng" is not "Ready", error: <nil>
	W1018 18:26:07.823007  221240 pod_ready.go:104] pod "coredns-66bc5c9577-q7mng" is not "Ready", error: <nil>
	I1018 18:26:08.592684  224323 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-111074
	
	I1018 18:26:08.592761  224323 ubuntu.go:182] provisioning hostname "auto-111074"
	I1018 18:26:08.592878  224323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-111074
	I1018 18:26:08.617878  224323 main.go:141] libmachine: Using SSH client type: native
	I1018 18:26:08.618188  224323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1018 18:26:08.618200  224323 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-111074 && echo "auto-111074" | sudo tee /etc/hostname
	I1018 18:26:08.794584  224323 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-111074
	
	I1018 18:26:08.794741  224323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-111074
	I1018 18:26:08.832863  224323 main.go:141] libmachine: Using SSH client type: native
	I1018 18:26:08.833294  224323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1018 18:26:08.833316  224323 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-111074' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-111074/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-111074' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 18:26:08.989204  224323 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 18:26:08.989281  224323 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-2509/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-2509/.minikube}
	I1018 18:26:08.989351  224323 ubuntu.go:190] setting up certificates
	I1018 18:26:08.989386  224323 provision.go:84] configureAuth start
	I1018 18:26:08.989492  224323 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-111074
	I1018 18:26:09.014400  224323 provision.go:143] copyHostCerts
	I1018 18:26:09.014459  224323 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem, removing ...
	I1018 18:26:09.014469  224323 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem
	I1018 18:26:09.014542  224323 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem (1078 bytes)
	I1018 18:26:09.014655  224323 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem, removing ...
	I1018 18:26:09.014661  224323 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem
	I1018 18:26:09.014690  224323 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem (1123 bytes)
	I1018 18:26:09.014744  224323 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem, removing ...
	I1018 18:26:09.014749  224323 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem
	I1018 18:26:09.014771  224323 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem (1675 bytes)
	I1018 18:26:09.014832  224323 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem org=jenkins.auto-111074 san=[127.0.0.1 192.168.85.2 auto-111074 localhost minikube]
	I1018 18:26:09.412351  224323 provision.go:177] copyRemoteCerts
	I1018 18:26:09.412430  224323 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 18:26:09.412477  224323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-111074
	I1018 18:26:09.438714  224323 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/auto-111074/id_rsa Username:docker}
	I1018 18:26:09.557682  224323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1018 18:26:09.579667  224323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 18:26:09.601410  224323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 18:26:09.622889  224323 provision.go:87] duration metric: took 633.467396ms to configureAuth
	I1018 18:26:09.622913  224323 ubuntu.go:206] setting minikube options for container-runtime
	I1018 18:26:09.623095  224323 config.go:182] Loaded profile config "auto-111074": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 18:26:09.623197  224323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-111074
	I1018 18:26:09.649200  224323 main.go:141] libmachine: Using SSH client type: native
	I1018 18:26:09.649520  224323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1018 18:26:09.649543  224323 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 18:26:09.964320  224323 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 18:26:09.964347  224323 machine.go:96] duration metric: took 4.557654213s to provisionDockerMachine
	I1018 18:26:09.964357  224323 client.go:171] duration metric: took 11.961834302s to LocalClient.Create
	I1018 18:26:09.964372  224323 start.go:167] duration metric: took 11.961904145s to libmachine.API.Create "auto-111074"
	I1018 18:26:09.964379  224323 start.go:293] postStartSetup for "auto-111074" (driver="docker")
	I1018 18:26:09.964388  224323 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 18:26:09.964462  224323 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 18:26:09.964512  224323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-111074
	I1018 18:26:09.994373  224323 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/auto-111074/id_rsa Username:docker}
	I1018 18:26:10.106333  224323 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 18:26:10.111401  224323 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 18:26:10.111433  224323 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 18:26:10.111445  224323 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/addons for local assets ...
	I1018 18:26:10.111504  224323 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/files for local assets ...
	I1018 18:26:10.111591  224323 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> 43202.pem in /etc/ssl/certs
	I1018 18:26:10.111726  224323 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 18:26:10.120862  224323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /etc/ssl/certs/43202.pem (1708 bytes)
	I1018 18:26:10.146009  224323 start.go:296] duration metric: took 181.61568ms for postStartSetup
	I1018 18:26:10.149576  224323 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-111074
	I1018 18:26:10.170567  224323 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/auto-111074/config.json ...
	I1018 18:26:10.170871  224323 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 18:26:10.170922  224323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-111074
	I1018 18:26:10.196832  224323 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/auto-111074/id_rsa Username:docker}
	I1018 18:26:10.306232  224323 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 18:26:10.311492  224323 start.go:128] duration metric: took 12.312705043s to createHost
	I1018 18:26:10.311513  224323 start.go:83] releasing machines lock for "auto-111074", held for 12.312825192s
	I1018 18:26:10.311583  224323 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-111074
	I1018 18:26:10.334307  224323 ssh_runner.go:195] Run: cat /version.json
	I1018 18:26:10.334365  224323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-111074
	I1018 18:26:10.334605  224323 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 18:26:10.334661  224323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-111074
	I1018 18:26:10.370171  224323 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/auto-111074/id_rsa Username:docker}
	I1018 18:26:10.370433  224323 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/auto-111074/id_rsa Username:docker}
	I1018 18:26:10.493052  224323 ssh_runner.go:195] Run: systemctl --version
	I1018 18:26:10.597987  224323 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 18:26:10.665641  224323 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 18:26:10.672264  224323 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 18:26:10.672410  224323 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 18:26:10.712083  224323 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1018 18:26:10.712114  224323 start.go:495] detecting cgroup driver to use...
	I1018 18:26:10.712147  224323 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 18:26:10.712209  224323 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 18:26:10.736157  224323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 18:26:10.756415  224323 docker.go:218] disabling cri-docker service (if available) ...
	I1018 18:26:10.756477  224323 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 18:26:10.777168  224323 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 18:26:10.807172  224323 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 18:26:10.984041  224323 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 18:26:11.145532  224323 docker.go:234] disabling docker service ...
	I1018 18:26:11.145672  224323 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 18:26:11.177807  224323 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 18:26:11.193063  224323 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 18:26:11.348246  224323 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 18:26:11.509912  224323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 18:26:11.523879  224323 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 18:26:11.540287  224323 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 18:26:11.540352  224323 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:26:11.550213  224323 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 18:26:11.550283  224323 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:26:11.559662  224323 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:26:11.568869  224323 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:26:11.578837  224323 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 18:26:11.588416  224323 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:26:11.608740  224323 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:26:11.622291  224323 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:26:11.634232  224323 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 18:26:11.641619  224323 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 18:26:11.650371  224323 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 18:26:11.800781  224323 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 18:26:12.297583  224323 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 18:26:12.297657  224323 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 18:26:12.306661  224323 start.go:563] Will wait 60s for crictl version
	I1018 18:26:12.306725  224323 ssh_runner.go:195] Run: which crictl
	I1018 18:26:12.311273  224323 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 18:26:12.363301  224323 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 18:26:12.363449  224323 ssh_runner.go:195] Run: crio --version
	I1018 18:26:12.426960  224323 ssh_runner.go:195] Run: crio --version
	I1018 18:26:12.470676  224323 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 18:26:12.473683  224323 cli_runner.go:164] Run: docker network inspect auto-111074 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 18:26:12.491437  224323 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1018 18:26:12.495816  224323 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 18:26:12.506220  224323 kubeadm.go:883] updating cluster {Name:auto-111074 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-111074 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 18:26:12.506333  224323 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 18:26:12.506387  224323 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 18:26:12.543235  224323 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 18:26:12.543257  224323 crio.go:433] Images already preloaded, skipping extraction
	I1018 18:26:12.543319  224323 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 18:26:12.580200  224323 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 18:26:12.580271  224323 cache_images.go:85] Images are preloaded, skipping loading
	I1018 18:26:12.580293  224323 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1018 18:26:12.580422  224323 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-111074 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-111074 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 18:26:12.580538  224323 ssh_runner.go:195] Run: crio config
	W1018 18:26:10.326349  221240 pod_ready.go:104] pod "coredns-66bc5c9577-q7mng" is not "Ready", error: <nil>
	W1018 18:26:12.326555  221240 pod_ready.go:104] pod "coredns-66bc5c9577-q7mng" is not "Ready", error: <nil>
	I1018 18:26:12.671128  224323 cni.go:84] Creating CNI manager for ""
	I1018 18:26:12.671199  224323 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 18:26:12.671235  224323 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 18:26:12.671285  224323 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-111074 NodeName:auto-111074 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 18:26:12.671482  224323 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-111074"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 18:26:12.671595  224323 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 18:26:12.679886  224323 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 18:26:12.680005  224323 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 18:26:12.688197  224323 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1018 18:26:12.702690  224323 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 18:26:12.716421  224323 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
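	The kubeadm config rendered above is the payload of the kubeadm.yaml.new transfer here. A minimal sketch of how such a file could be sanity-checked by hand on the node before an init, assuming kubeadm v1.34 is on the PATH (not part of the test run):
	$ sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run   # preflight checks and manifest generation only
	$ kubeadm config print init-defaults --component-configs KubeletConfiguration,KubeProxyConfiguration   # upstream defaults to diff against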
	I1018 18:26:12.730143  224323 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1018 18:26:12.734392  224323 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 18:26:12.749394  224323 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 18:26:12.916542  224323 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 18:26:12.935174  224323 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/auto-111074 for IP: 192.168.85.2
	I1018 18:26:12.935251  224323 certs.go:195] generating shared ca certs ...
	I1018 18:26:12.935283  224323 certs.go:227] acquiring lock for ca certs: {Name:mk544ed642fa2832e9f6dd22fa45f3270b7c1ad4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:26:12.935455  224323 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key
	I1018 18:26:12.935536  224323 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key
	I1018 18:26:12.935564  224323 certs.go:257] generating profile certs ...
	I1018 18:26:12.935650  224323 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/auto-111074/client.key
	I1018 18:26:12.935698  224323 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/auto-111074/client.crt with IP's: []
	I1018 18:26:13.288623  224323 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/auto-111074/client.crt ...
	I1018 18:26:13.288694  224323 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/auto-111074/client.crt: {Name:mk474827a1ed79c079e368d33137d842f0296147 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:26:13.288950  224323 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/auto-111074/client.key ...
	I1018 18:26:13.288989  224323 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/auto-111074/client.key: {Name:mk2f9cd4544e154adffdb5adb992c48be1817caa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:26:13.289128  224323 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/auto-111074/apiserver.key.861b719e
	I1018 18:26:13.289170  224323 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/auto-111074/apiserver.crt.861b719e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1018 18:26:13.759868  224323 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/auto-111074/apiserver.crt.861b719e ...
	I1018 18:26:13.759895  224323 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/auto-111074/apiserver.crt.861b719e: {Name:mk41aca77eff0d161b4af3f3692bda3a4f33d81b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:26:13.760137  224323 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/auto-111074/apiserver.key.861b719e ...
	I1018 18:26:13.760151  224323 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/auto-111074/apiserver.key.861b719e: {Name:mk7dd7a3d6cf6c973bf618351a13a78d20d534b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:26:13.760227  224323 certs.go:382] copying /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/auto-111074/apiserver.crt.861b719e -> /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/auto-111074/apiserver.crt
	I1018 18:26:13.760300  224323 certs.go:386] copying /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/auto-111074/apiserver.key.861b719e -> /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/auto-111074/apiserver.key
	I1018 18:26:13.760351  224323 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/auto-111074/proxy-client.key
	I1018 18:26:13.760363  224323 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/auto-111074/proxy-client.crt with IP's: []
	I1018 18:26:14.660452  224323 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/auto-111074/proxy-client.crt ...
	I1018 18:26:14.660481  224323 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/auto-111074/proxy-client.crt: {Name:mkbe5243faf10ae8c3dc239ca34f754fdd391948 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:26:14.660647  224323 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/auto-111074/proxy-client.key ...
	I1018 18:26:14.660661  224323 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/auto-111074/proxy-client.key: {Name:mk694638a1c79b9529d90c102bbc84f9dc4c7fb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:26:14.660835  224323 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem (1338 bytes)
	W1018 18:26:14.660877  224323 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320_empty.pem, impossibly tiny 0 bytes
	I1018 18:26:14.660890  224323 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 18:26:14.660916  224323 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem (1078 bytes)
	I1018 18:26:14.660960  224323 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem (1123 bytes)
	I1018 18:26:14.660996  224323 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem (1675 bytes)
	I1018 18:26:14.661044  224323 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem (1708 bytes)
	I1018 18:26:14.661622  224323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 18:26:14.683484  224323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 18:26:14.702621  224323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 18:26:14.721020  224323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 18:26:14.756236  224323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/auto-111074/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1018 18:26:14.807965  224323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/auto-111074/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 18:26:14.830459  224323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/auto-111074/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 18:26:14.848762  224323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/auto-111074/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 18:26:14.866520  224323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 18:26:14.883946  224323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem --> /usr/share/ca-certificates/4320.pem (1338 bytes)
	I1018 18:26:14.901688  224323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /usr/share/ca-certificates/43202.pem (1708 bytes)
	I1018 18:26:14.919785  224323 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
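	With the certificates copied into /var/lib/minikube/certs above, the SANs baked into the apiserver cert (the four IPs listed at 18:26:13.289170) could be confirmed on the node with a standard openssl query; shown as a sketch, not part of the test run:
	$ sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt | grep -A2 'Subject Alternative Name'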
	I1018 18:26:14.932779  224323 ssh_runner.go:195] Run: openssl version
	I1018 18:26:14.939957  224323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 18:26:14.948520  224323 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 18:26:14.952563  224323 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I1018 18:26:14.952636  224323 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 18:26:14.993679  224323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 18:26:15.002016  224323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4320.pem && ln -fs /usr/share/ca-certificates/4320.pem /etc/ssl/certs/4320.pem"
	I1018 18:26:15.012310  224323 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4320.pem
	I1018 18:26:15.018033  224323 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 17:19 /usr/share/ca-certificates/4320.pem
	I1018 18:26:15.018121  224323 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4320.pem
	I1018 18:26:15.061247  224323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4320.pem /etc/ssl/certs/51391683.0"
	I1018 18:26:15.070303  224323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43202.pem && ln -fs /usr/share/ca-certificates/43202.pem /etc/ssl/certs/43202.pem"
	I1018 18:26:15.079640  224323 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43202.pem
	I1018 18:26:15.084416  224323 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 17:19 /usr/share/ca-certificates/43202.pem
	I1018 18:26:15.084499  224323 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43202.pem
	I1018 18:26:15.127025  224323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43202.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 18:26:15.135884  224323 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 18:26:15.140357  224323 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 18:26:15.140409  224323 kubeadm.go:400] StartCluster: {Name:auto-111074 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-111074 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 18:26:15.140485  224323 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 18:26:15.140551  224323 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 18:26:15.169721  224323 cri.go:89] found id: ""
	I1018 18:26:15.169789  224323 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 18:26:15.179666  224323 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 18:26:15.188048  224323 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 18:26:15.188114  224323 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 18:26:15.198453  224323 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 18:26:15.198482  224323 kubeadm.go:157] found existing configuration files:
	
	I1018 18:26:15.198531  224323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 18:26:15.206784  224323 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 18:26:15.206853  224323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 18:26:15.214381  224323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 18:26:15.222248  224323 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 18:26:15.222313  224323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 18:26:15.229766  224323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 18:26:15.238050  224323 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 18:26:15.238120  224323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 18:26:15.245567  224323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 18:26:15.254379  224323 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 18:26:15.254441  224323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1018 18:26:15.264256  224323 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 18:26:15.336831  224323 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 18:26:15.337373  224323 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 18:26:15.361584  224323 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 18:26:15.361665  224323 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1018 18:26:15.361709  224323 kubeadm.go:318] OS: Linux
	I1018 18:26:15.361761  224323 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 18:26:15.361816  224323 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1018 18:26:15.361869  224323 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 18:26:15.361923  224323 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 18:26:15.361977  224323 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 18:26:15.362035  224323 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 18:26:15.362086  224323 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 18:26:15.362141  224323 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 18:26:15.362192  224323 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1018 18:26:15.484530  224323 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 18:26:15.484657  224323 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 18:26:15.484772  224323 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 18:26:15.496834  224323 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 18:26:15.504481  224323 out.go:252]   - Generating certificates and keys ...
	I1018 18:26:15.504577  224323 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 18:26:15.504661  224323 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 18:26:15.925815  224323 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 18:26:16.179151  224323 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 18:26:16.887629  224323 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 18:26:17.079724  224323 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	W1018 18:26:14.822799  221240 pod_ready.go:104] pod "coredns-66bc5c9577-q7mng" is not "Ready", error: <nil>
	W1018 18:26:17.322126  221240 pod_ready.go:104] pod "coredns-66bc5c9577-q7mng" is not "Ready", error: <nil>
	I1018 18:26:17.687268  224323 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 18:26:17.687786  224323 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [auto-111074 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1018 18:26:18.606076  224323 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 18:26:18.606691  224323 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [auto-111074 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1018 18:26:18.875886  224323 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 18:26:19.111440  224323 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 18:26:20.131362  224323 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 18:26:20.131691  224323 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 18:26:21.346817  224323 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 18:26:22.004898  224323 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	W1018 18:26:19.823747  221240 pod_ready.go:104] pod "coredns-66bc5c9577-q7mng" is not "Ready", error: <nil>
	W1018 18:26:21.824111  221240 pod_ready.go:104] pod "coredns-66bc5c9577-q7mng" is not "Ready", error: <nil>
	I1018 18:26:22.879275  224323 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 18:26:24.154575  224323 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 18:26:24.863902  224323 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 18:26:24.864622  224323 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 18:26:24.867137  224323 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 18:26:24.870862  224323 out.go:252]   - Booting up control plane ...
	I1018 18:26:24.870969  224323 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 18:26:24.871050  224323 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 18:26:24.871120  224323 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 18:26:24.900404  224323 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 18:26:24.900757  224323 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 18:26:24.907814  224323 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 18:26:24.908130  224323 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 18:26:24.908178  224323 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 18:26:25.048897  224323 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 18:26:25.049047  224323 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 18:26:27.554188  224323 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 2.501485681s
	I1018 18:26:27.554311  224323 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 18:26:27.554397  224323 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1018 18:26:27.554490  224323 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 18:26:27.554571  224323 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1018 18:26:24.321733  221240 pod_ready.go:104] pod "coredns-66bc5c9577-q7mng" is not "Ready", error: <nil>
	W1018 18:26:26.323672  221240 pod_ready.go:104] pod "coredns-66bc5c9577-q7mng" is not "Ready", error: <nil>
	W1018 18:26:28.832067  221240 pod_ready.go:104] pod "coredns-66bc5c9577-q7mng" is not "Ready", error: <nil>
	I1018 18:26:30.405395  224323 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.851245248s
	W1018 18:26:31.321247  221240 pod_ready.go:104] pod "coredns-66bc5c9577-q7mng" is not "Ready", error: <nil>
	W1018 18:26:33.322726  221240 pod_ready.go:104] pod "coredns-66bc5c9577-q7mng" is not "Ready", error: <nil>
	I1018 18:26:33.459212  224323 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 5.905272777s
	I1018 18:26:34.555198  224323 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 7.001321627s
	I1018 18:26:34.578525  224323 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 18:26:34.598306  224323 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 18:26:34.619606  224323 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 18:26:34.620032  224323 kubeadm.go:318] [mark-control-plane] Marking the node auto-111074 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 18:26:34.638455  224323 kubeadm.go:318] [bootstrap-token] Using token: 8ryud3.ooab0ywsri4uwu90
	I1018 18:26:34.641357  224323 out.go:252]   - Configuring RBAC rules ...
	I1018 18:26:34.641490  224323 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 18:26:34.658582  224323 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 18:26:34.669155  224323 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 18:26:34.674907  224323 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 18:26:34.685996  224323 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 18:26:34.692628  224323 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 18:26:34.972881  224323 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 18:26:35.387582  224323 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 18:26:35.963099  224323 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 18:26:35.964255  224323 kubeadm.go:318] 
	I1018 18:26:35.964342  224323 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 18:26:35.964352  224323 kubeadm.go:318] 
	I1018 18:26:35.964433  224323 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 18:26:35.964442  224323 kubeadm.go:318] 
	I1018 18:26:35.964468  224323 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 18:26:35.964534  224323 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 18:26:35.964599  224323 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 18:26:35.964609  224323 kubeadm.go:318] 
	I1018 18:26:35.964665  224323 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 18:26:35.964674  224323 kubeadm.go:318] 
	I1018 18:26:35.964724  224323 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 18:26:35.964732  224323 kubeadm.go:318] 
	I1018 18:26:35.964786  224323 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 18:26:35.964871  224323 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 18:26:35.964977  224323 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 18:26:35.964989  224323 kubeadm.go:318] 
	I1018 18:26:35.965078  224323 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 18:26:35.965166  224323 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 18:26:35.965175  224323 kubeadm.go:318] 
	I1018 18:26:35.965262  224323 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 8ryud3.ooab0ywsri4uwu90 \
	I1018 18:26:35.965374  224323 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:d0244c5bf86cdf97546c6a22045cb6ed9d7ead524d9c98d9ca35da77d5d7a04d \
	I1018 18:26:35.965399  224323 kubeadm.go:318] 	--control-plane 
	I1018 18:26:35.965407  224323 kubeadm.go:318] 
	I1018 18:26:35.965495  224323 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 18:26:35.965503  224323 kubeadm.go:318] 
	I1018 18:26:35.965593  224323 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 8ryud3.ooab0ywsri4uwu90 \
	I1018 18:26:35.965723  224323 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:d0244c5bf86cdf97546c6a22045cb6ed9d7ead524d9c98d9ca35da77d5d7a04d 
	I1018 18:26:35.971248  224323 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1018 18:26:35.971486  224323 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1018 18:26:35.971602  224323 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
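	The join commands printed by kubeadm above embed a bootstrap token with the 24h TTL set in the InitConfiguration (ttl: 24h0m0s). If it expired before another node joined, a fresh one could be minted on the control plane with the standard kubeadm command, shown here only as a sketch:
	$ sudo kubeadm token create --print-join-command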
	I1018 18:26:35.971621  224323 cni.go:84] Creating CNI manager for ""
	I1018 18:26:35.971629  224323 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 18:26:35.976648  224323 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1018 18:26:34.838899  221240 pod_ready.go:94] pod "coredns-66bc5c9577-q7mng" is "Ready"
	I1018 18:26:34.838924  221240 pod_ready.go:86] duration metric: took 31.023214459s for pod "coredns-66bc5c9577-q7mng" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:26:34.842319  221240 pod_ready.go:83] waiting for pod "etcd-no-preload-729957" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:26:34.847723  221240 pod_ready.go:94] pod "etcd-no-preload-729957" is "Ready"
	I1018 18:26:34.847750  221240 pod_ready.go:86] duration metric: took 5.404484ms for pod "etcd-no-preload-729957" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:26:34.850288  221240 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-729957" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:26:34.854731  221240 pod_ready.go:94] pod "kube-apiserver-no-preload-729957" is "Ready"
	I1018 18:26:34.854757  221240 pod_ready.go:86] duration metric: took 4.442642ms for pod "kube-apiserver-no-preload-729957" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:26:34.856946  221240 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-729957" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:26:35.020925  221240 pod_ready.go:94] pod "kube-controller-manager-no-preload-729957" is "Ready"
	I1018 18:26:35.021028  221240 pod_ready.go:86] duration metric: took 164.050245ms for pod "kube-controller-manager-no-preload-729957" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:26:35.219666  221240 pod_ready.go:83] waiting for pod "kube-proxy-75znn" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:26:35.619633  221240 pod_ready.go:94] pod "kube-proxy-75znn" is "Ready"
	I1018 18:26:35.619706  221240 pod_ready.go:86] duration metric: took 399.945845ms for pod "kube-proxy-75znn" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:26:35.818878  221240 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-729957" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:26:36.219793  221240 pod_ready.go:94] pod "kube-scheduler-no-preload-729957" is "Ready"
	I1018 18:26:36.219871  221240 pod_ready.go:86] duration metric: took 400.95865ms for pod "kube-scheduler-no-preload-729957" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:26:36.219898  221240 pod_ready.go:40] duration metric: took 32.409356227s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 18:26:36.288922  221240 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 18:26:36.292160  221240 out.go:179] * Done! kubectl is now configured to use "no-preload-729957" cluster and "default" namespace by default
	I1018 18:26:35.979707  224323 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 18:26:35.986511  224323 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1018 18:26:35.986532  224323 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 18:26:36.011239  224323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1018 18:26:36.992453  224323 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 18:26:36.992627  224323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:26:36.992714  224323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-111074 minikube.k8s.io/updated_at=2025_10_18T18_26_36_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404 minikube.k8s.io/name=auto-111074 minikube.k8s.io/primary=true
	I1018 18:26:37.180863  224323 ops.go:34] apiserver oom_adj: -16
	I1018 18:26:37.181001  224323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:26:37.681712  224323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:26:38.181084  224323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:26:38.681864  224323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:26:39.181162  224323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:26:39.681884  224323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:26:40.181103  224323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:26:40.681989  224323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:26:40.864312  224323 kubeadm.go:1113] duration metric: took 3.871741723s to wait for elevateKubeSystemPrivileges
	I1018 18:26:40.864344  224323 kubeadm.go:402] duration metric: took 25.723938625s to StartCluster
	I1018 18:26:40.864361  224323 settings.go:142] acquiring lock: {Name:mk3a3fd093bc95e20cc1842611fedcbe4a79e692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:26:40.864421  224323 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-2509/kubeconfig
	I1018 18:26:40.865404  224323 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/kubeconfig: {Name:mk229d7d0b6575771899e0ce8a346bbddf5cf86b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:26:40.865637  224323 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 18:26:40.865739  224323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 18:26:40.865995  224323 config.go:182] Loaded profile config "auto-111074": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 18:26:40.865971  224323 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 18:26:40.866063  224323 addons.go:69] Setting storage-provisioner=true in profile "auto-111074"
	I1018 18:26:40.866079  224323 addons.go:69] Setting default-storageclass=true in profile "auto-111074"
	I1018 18:26:40.866084  224323 addons.go:238] Setting addon storage-provisioner=true in "auto-111074"
	I1018 18:26:40.866095  224323 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-111074"
	I1018 18:26:40.866108  224323 host.go:66] Checking if "auto-111074" exists ...
	I1018 18:26:40.866405  224323 cli_runner.go:164] Run: docker container inspect auto-111074 --format={{.State.Status}}
	I1018 18:26:40.866594  224323 cli_runner.go:164] Run: docker container inspect auto-111074 --format={{.State.Status}}
	I1018 18:26:40.871435  224323 out.go:179] * Verifying Kubernetes components...
	I1018 18:26:40.876200  224323 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 18:26:40.907654  224323 addons.go:238] Setting addon default-storageclass=true in "auto-111074"
	I1018 18:26:40.907691  224323 host.go:66] Checking if "auto-111074" exists ...
	I1018 18:26:40.908095  224323 cli_runner.go:164] Run: docker container inspect auto-111074 --format={{.State.Status}}
	I1018 18:26:40.911154  224323 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 18:26:40.916309  224323 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 18:26:40.916340  224323 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 18:26:40.916406  224323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-111074
	I1018 18:26:40.953595  224323 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/auto-111074/id_rsa Username:docker}
	I1018 18:26:40.954190  224323 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 18:26:40.954211  224323 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 18:26:40.954271  224323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-111074
	I1018 18:26:40.986211  224323 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/auto-111074/id_rsa Username:docker}
	I1018 18:26:41.214567  224323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1018 18:26:41.227892  224323 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 18:26:41.257710  224323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 18:26:41.285897  224323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 18:26:41.801481  224323 node_ready.go:35] waiting up to 15m0s for node "auto-111074" to be "Ready" ...
	I1018 18:26:41.802391  224323 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1018 18:26:42.062919  224323 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1018 18:26:42.065836  224323 addons.go:514] duration metric: took 1.199856343s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1018 18:26:42.306785  224323 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-111074" context rescaled to 1 replicas
	W1018 18:26:43.804501  224323 node_ready.go:57] node "auto-111074" has "Ready":"False" status (will retry)
	W1018 18:26:45.805193  224323 node_ready.go:57] node "auto-111074" has "Ready":"False" status (will retry)
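	At this point the addons are enabled and the start loop is polling node readiness. Assuming the kubeconfig context minikube creates under the profile name, the same state could be watched from the host (hypothetical spot-checks, not part of the test run):
	$ kubectl --context auto-111074 get nodes -w
	$ kubectl --context auto-111074 -n kube-system get pods
	$ kubectl --context auto-111074 get storageclass   # default-storageclass addon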
	
	
	==> CRI-O <==
	Oct 18 18:26:28 no-preload-729957 crio[647]: time="2025-10-18T18:26:28.828545196Z" level=info msg="Removed container 746019b0ec9b8fbc991561dd0fafcc67d3592ed100d0dd188df361fec5595531: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2jw6d/dashboard-metrics-scraper" id=5680264c-587f-4d6d-8f6b-19f70c30faf2 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 18:26:33 no-preload-729957 conmon[1158]: conmon 0fa470fa642abc5faf16 <ninfo>: container 1161 exited with status 1
	Oct 18 18:26:33 no-preload-729957 crio[647]: time="2025-10-18T18:26:33.81268602Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=0b1deab0-631c-407d-8118-b91d5171fb57 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 18:26:33 no-preload-729957 crio[647]: time="2025-10-18T18:26:33.813866433Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=98f17027-1ff0-41a4-8928-d052babcfe56 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 18:26:33 no-preload-729957 crio[647]: time="2025-10-18T18:26:33.817358309Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=effb10e4-e923-42fd-89bf-f4d99794a40a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 18:26:33 no-preload-729957 crio[647]: time="2025-10-18T18:26:33.81761099Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 18:26:33 no-preload-729957 crio[647]: time="2025-10-18T18:26:33.832428058Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 18:26:33 no-preload-729957 crio[647]: time="2025-10-18T18:26:33.832610502Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/0beac2733dfba336b2e80eec0a88ae0bdee5db7e7c971cb9557db83be595a6e0/merged/etc/passwd: no such file or directory"
	Oct 18 18:26:33 no-preload-729957 crio[647]: time="2025-10-18T18:26:33.832633772Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/0beac2733dfba336b2e80eec0a88ae0bdee5db7e7c971cb9557db83be595a6e0/merged/etc/group: no such file or directory"
	Oct 18 18:26:33 no-preload-729957 crio[647]: time="2025-10-18T18:26:33.832885057Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 18:26:33 no-preload-729957 crio[647]: time="2025-10-18T18:26:33.87785449Z" level=info msg="Created container f8531bead1ef2e3eada16cba59c589956dc484ab1cff2f47a411a5f3dbd97427: kube-system/storage-provisioner/storage-provisioner" id=effb10e4-e923-42fd-89bf-f4d99794a40a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 18:26:33 no-preload-729957 crio[647]: time="2025-10-18T18:26:33.879509871Z" level=info msg="Starting container: f8531bead1ef2e3eada16cba59c589956dc484ab1cff2f47a411a5f3dbd97427" id=bae67397-a492-4ee0-b3e6-9fa8a11526cb name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 18:26:33 no-preload-729957 crio[647]: time="2025-10-18T18:26:33.888000746Z" level=info msg="Started container" PID=1631 containerID=f8531bead1ef2e3eada16cba59c589956dc484ab1cff2f47a411a5f3dbd97427 description=kube-system/storage-provisioner/storage-provisioner id=bae67397-a492-4ee0-b3e6-9fa8a11526cb name=/runtime.v1.RuntimeService/StartContainer sandboxID=e61150382859edd233bddf3dac5345409f9f681c0873b33012c2409ff14a3372
	Oct 18 18:26:43 no-preload-729957 crio[647]: time="2025-10-18T18:26:43.412029114Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 18:26:43 no-preload-729957 crio[647]: time="2025-10-18T18:26:43.418417201Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 18:26:43 no-preload-729957 crio[647]: time="2025-10-18T18:26:43.418454584Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 18:26:43 no-preload-729957 crio[647]: time="2025-10-18T18:26:43.418477378Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 18:26:43 no-preload-729957 crio[647]: time="2025-10-18T18:26:43.421731327Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 18:26:43 no-preload-729957 crio[647]: time="2025-10-18T18:26:43.421772641Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 18:26:43 no-preload-729957 crio[647]: time="2025-10-18T18:26:43.421796099Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 18:26:43 no-preload-729957 crio[647]: time="2025-10-18T18:26:43.425093077Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 18:26:43 no-preload-729957 crio[647]: time="2025-10-18T18:26:43.425128992Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 18:26:43 no-preload-729957 crio[647]: time="2025-10-18T18:26:43.425153312Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 18:26:43 no-preload-729957 crio[647]: time="2025-10-18T18:26:43.428047864Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 18:26:43 no-preload-729957 crio[647]: time="2025-10-18T18:26:43.428080554Z" level=info msg="Updated default CNI network name to kindnet"
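	The CRI-O excerpt above is read from the node's journal; an equivalent query against the same profile would look roughly like this (hypothetical invocation, time chosen to cover the window above):
	$ minikube ssh -p no-preload-729957 "sudo journalctl -u crio --since '18:26:28' --no-pager"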
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	f8531bead1ef2       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           17 seconds ago      Running             storage-provisioner         2                   e61150382859e       storage-provisioner                          kube-system
	471127644b325       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           22 seconds ago      Exited              dashboard-metrics-scraper   2                   3a4d29ccc9199       dashboard-metrics-scraper-6ffb444bf9-2jw6d   kubernetes-dashboard
	7624f6abd4598       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   32 seconds ago      Running             kubernetes-dashboard        0                   7333b2aa64d8d       kubernetes-dashboard-855c9754f9-dq5cz        kubernetes-dashboard
	0fa470fa642ab       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           48 seconds ago      Exited              storage-provisioner         1                   e61150382859e       storage-provisioner                          kube-system
	5f5f900bb1fe2       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           48 seconds ago      Running             coredns                     1                   6e2d96f08ec51       coredns-66bc5c9577-q7mng                     kube-system
	8cbe09ef34bf4       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           48 seconds ago      Running             kindnet-cni                 1                   fb051ff6319a9       kindnet-4hbt7                                kube-system
	2a3d029247f0f       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           48 seconds ago      Running             kube-proxy                  1                   9523085770ded       kube-proxy-75znn                             kube-system
	fc5063529049a       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           49 seconds ago      Running             busybox                     1                   642da68c76e5a       busybox                                      default
	a51a8b9c45aa1       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           57 seconds ago      Running             kube-apiserver              1                   ac5622478a1f2       kube-apiserver-no-preload-729957             kube-system
	6974399a43a07       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           57 seconds ago      Running             kube-controller-manager     1                   c31990e6c5794       kube-controller-manager-no-preload-729957    kube-system
	d2a6df964e5a2       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           57 seconds ago      Running             kube-scheduler              1                   e24d20275da4a       kube-scheduler-no-preload-729957             kube-system
	b42f50a512a46       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           57 seconds ago      Running             etcd                        1                   fba33013edfbf       etcd-no-preload-729957                       kube-system
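	The container listing above matches what crictl reports against the CRI-O socket configured earlier in this log; a sketch of reproducing it on the node:
	$ minikube ssh -p no-preload-729957 "sudo crictl ps -a"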
	
	
	==> coredns [5f5f900bb1fe229f3538acdd9a0c3aad246dff6301bfc684afdfd990ab97fe94] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:57117 - 7152 "HINFO IN 3496716410604839321.5072036916165056099. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.016380576s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
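	The coredns log above shows the kubernetes plugin retrying the API at the service VIP (10.96.0.1) until it became reachable after the restart. The same output could be pulled through the API server, assuming the standard kube-dns label applied by kubeadm:
	$ kubectl --context no-preload-729957 -n kube-system logs -l k8s-app=kube-dns --tail=50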
	
	
	==> describe nodes <==
	Name:               no-preload-729957
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-729957
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=no-preload-729957
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T18_24_55_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 18:24:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-729957
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 18:26:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 18:26:31 +0000   Sat, 18 Oct 2025 18:24:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 18:26:31 +0000   Sat, 18 Oct 2025 18:24:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 18:26:31 +0000   Sat, 18 Oct 2025 18:24:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 18:26:31 +0000   Sat, 18 Oct 2025 18:25:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-729957
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                767ca1b7-c7ba-48aa-bccb-3679302b1946
	  Boot ID:                    8a1d6305-2994-4aa4-a4cc-6b62966b9918
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-66bc5c9577-q7mng                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     111s
	  kube-system                 etcd-no-preload-729957                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         116s
	  kube-system                 kindnet-4hbt7                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      112s
	  kube-system                 kube-apiserver-no-preload-729957              250m (12%)    0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-controller-manager-no-preload-729957     200m (10%)    0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-proxy-75znn                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-scheduler-no-preload-729957              100m (5%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-2jw6d    0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-dq5cz         0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 109s                 kube-proxy       
	  Normal   Starting                 47s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  2m8s (x8 over 2m8s)  kubelet          Node no-preload-729957 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m8s (x8 over 2m8s)  kubelet          Node no-preload-729957 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m8s (x8 over 2m8s)  kubelet          Node no-preload-729957 status is now: NodeHasSufficientPID
	  Normal   Starting                 117s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 117s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    116s                 kubelet          Node no-preload-729957 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     116s                 kubelet          Node no-preload-729957 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  116s                 kubelet          Node no-preload-729957 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           113s                 node-controller  Node no-preload-729957 event: Registered Node no-preload-729957 in Controller
	  Normal   NodeReady                95s                  kubelet          Node no-preload-729957 status is now: NodeReady
	  Normal   Starting                 59s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 59s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  59s (x8 over 59s)    kubelet          Node no-preload-729957 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    59s (x8 over 59s)    kubelet          Node no-preload-729957 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     59s (x8 over 59s)    kubelet          Node no-preload-729957 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           46s                  node-controller  Node no-preload-729957 event: Registered Node no-preload-729957 in Controller
	
	
	==> dmesg <==
	[ +25.128760] overlayfs: idmapped layers are currently not supported
	[Oct18 18:06] overlayfs: idmapped layers are currently not supported
	[Oct18 18:07] overlayfs: idmapped layers are currently not supported
	[Oct18 18:08] overlayfs: idmapped layers are currently not supported
	[Oct18 18:09] overlayfs: idmapped layers are currently not supported
	[Oct18 18:11] overlayfs: idmapped layers are currently not supported
	[Oct18 18:13] overlayfs: idmapped layers are currently not supported
	[ +30.969240] overlayfs: idmapped layers are currently not supported
	[Oct18 18:15] overlayfs: idmapped layers are currently not supported
	[Oct18 18:16] overlayfs: idmapped layers are currently not supported
	[Oct18 18:17] overlayfs: idmapped layers are currently not supported
	[ +23.167826] overlayfs: idmapped layers are currently not supported
	[Oct18 18:18] overlayfs: idmapped layers are currently not supported
	[ +38.509809] overlayfs: idmapped layers are currently not supported
	[Oct18 18:19] overlayfs: idmapped layers are currently not supported
	[Oct18 18:21] overlayfs: idmapped layers are currently not supported
	[Oct18 18:22] overlayfs: idmapped layers are currently not supported
	[Oct18 18:23] overlayfs: idmapped layers are currently not supported
	[ +30.822562] overlayfs: idmapped layers are currently not supported
	[Oct18 18:24] bpfilter: read fail -512
	[ +10.607871] overlayfs: idmapped layers are currently not supported
	[Oct18 18:25] overlayfs: idmapped layers are currently not supported
	[ +26.762544] overlayfs: idmapped layers are currently not supported
	[ +14.684259] overlayfs: idmapped layers are currently not supported
	[Oct18 18:26] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [b42f50a512a46ed3a6cad329c67f5e35b5354a294a55db5944cfbd20dd29cbd2] <==
	{"level":"warn","ts":"2025-10-18T18:25:57.844141Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:57.867449Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:57.886017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:57.913571Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:57.928995Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:57.960198Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:57.978332Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:57.991503Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:58.017317Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:58.039726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:58.067010Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:58.103789Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:58.137399Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:58.169435Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:58.187073Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:58.230324Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:58.238317Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:58.272472Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:58.299936Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:58.356103Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:58.448285Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49372","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T18:26:01.835593Z","caller":"traceutil/trace.go:172","msg":"trace[1702060539] transaction","detail":"{read_only:false; response_revision:511; number_of_response:1; }","duration":"134.592519ms","start":"2025-10-18T18:26:01.700878Z","end":"2025-10-18T18:26:01.835470Z","steps":["trace[1702060539] 'process raft request'  (duration: 88.333191ms)","trace[1702060539] 'compare'  (duration: 46.017183ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-18T18:26:02.704686Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"100.282726ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:controller:pv-protection-controller\" limit:1 ","response":"range_response_count:1 size:693"}
	{"level":"info","ts":"2025-10-18T18:26:02.704753Z","caller":"traceutil/trace.go:172","msg":"trace[1533455430] range","detail":"{range_begin:/registry/clusterroles/system:controller:pv-protection-controller; range_end:; response_count:1; response_revision:530; }","duration":"100.383348ms","start":"2025-10-18T18:26:02.604355Z","end":"2025-10-18T18:26:02.704739Z","steps":["trace[1533455430] 'agreement among raft nodes before linearized reading'  (duration: 40.945651ms)","trace[1533455430] 'range keys from in-memory index tree'  (duration: 59.253711ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-18T18:26:02.705007Z","caller":"traceutil/trace.go:172","msg":"trace[941557549] transaction","detail":"{read_only:false; response_revision:531; number_of_response:1; }","duration":"100.679499ms","start":"2025-10-18T18:26:02.604315Z","end":"2025-10-18T18:26:02.704994Z","steps":["trace[941557549] 'process raft request'  (duration: 40.952437ms)","trace[941557549] 'compare'  (duration: 59.221267ms)"],"step_count":2}
	
	
	==> kernel <==
	 18:26:51 up  2:09,  0 user,  load average: 3.22, 3.35, 2.95
	Linux no-preload-729957 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8cbe09ef34bf4eef049fd6b1f047b6b1c569dbdbffc529ac4e6883ef231e2b93] <==
	I1018 18:26:03.114241       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 18:26:03.114853       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1018 18:26:03.114995       1 main.go:148] setting mtu 1500 for CNI 
	I1018 18:26:03.115014       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 18:26:03.115024       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T18:26:03Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 18:26:03.406740       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 18:26:03.406822       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 18:26:03.406854       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 18:26:03.407159       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1018 18:26:33.407353       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1018 18:26:33.407564       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1018 18:26:33.407647       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1018 18:26:33.407723       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1018 18:26:35.007154       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 18:26:35.007199       1 metrics.go:72] Registering metrics
	I1018 18:26:35.007284       1 controller.go:711] "Syncing nftables rules"
	I1018 18:26:43.411723       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 18:26:43.411777       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a51a8b9c45aa1dd947ab88c80db7d69a3ada1bf2e6ca00bc66384aaccb0ff136] <==
	I1018 18:26:00.644337       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1018 18:26:00.644412       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1018 18:26:00.644432       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1018 18:26:00.661021       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1018 18:26:00.661468       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1018 18:26:00.661484       1 policy_source.go:240] refreshing policies
	I1018 18:26:00.662569       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1018 18:26:00.662586       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1018 18:26:00.681932       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 18:26:00.682629       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 18:26:00.683126       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1018 18:26:00.686421       1 cache.go:39] Caches are synced for autoregister controller
	I1018 18:26:00.710314       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1018 18:26:00.799370       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	E1018 18:26:00.873899       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1018 18:26:00.894266       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 18:26:02.263916       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 18:26:02.462718       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 18:26:02.711944       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 18:26:02.796172       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 18:26:03.135284       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.56.83"}
	I1018 18:26:03.190900       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.35.104"}
	I1018 18:26:06.093292       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 18:26:06.143610       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 18:26:06.322897       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [6974399a43a070944c3ef86eb0363ba4bca8f5c775d0d5143be212a028542142] <==
	I1018 18:26:05.697615       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1018 18:26:05.700360       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1018 18:26:05.700389       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 18:26:05.702565       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1018 18:26:05.711891       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1018 18:26:05.711994       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1018 18:26:05.711897       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1018 18:26:05.712056       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1018 18:26:05.712093       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1018 18:26:05.712104       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1018 18:26:05.712110       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1018 18:26:05.718155       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1018 18:26:05.719361       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 18:26:05.720485       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1018 18:26:05.721900       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1018 18:26:05.724482       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 18:26:05.724802       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 18:26:05.729222       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1018 18:26:05.731894       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1018 18:26:05.735837       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1018 18:26:05.736766       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 18:26:05.736799       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1018 18:26:05.741172       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 18:26:05.748766       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 18:26:05.759156       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	
	
	==> kube-proxy [2a3d029247f0f2961084715b69f7ac4e03f5bd09abdd133077b57c82216aefd4] <==
	I1018 18:26:03.109792       1 server_linux.go:53] "Using iptables proxy"
	I1018 18:26:03.219051       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 18:26:03.320186       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 18:26:03.320266       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1018 18:26:03.320346       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 18:26:03.344293       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 18:26:03.344423       1 server_linux.go:132] "Using iptables Proxier"
	I1018 18:26:03.348393       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 18:26:03.348766       1 server.go:527] "Version info" version="v1.34.1"
	I1018 18:26:03.349008       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 18:26:03.350330       1 config.go:200] "Starting service config controller"
	I1018 18:26:03.350388       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 18:26:03.350429       1 config.go:106] "Starting endpoint slice config controller"
	I1018 18:26:03.350455       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 18:26:03.350507       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 18:26:03.350534       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 18:26:03.351302       1 config.go:309] "Starting node config controller"
	I1018 18:26:03.351354       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 18:26:03.351382       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 18:26:03.450916       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 18:26:03.450925       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 18:26:03.450959       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [d2a6df964e5a27a75360411f0fbe62d805660605d883656062b3e9b3c98ffc61] <==
	I1018 18:25:58.647753       1 serving.go:386] Generated self-signed cert in-memory
	I1018 18:26:02.188157       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 18:26:02.188266       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 18:26:02.209831       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 18:26:02.209961       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1018 18:26:02.209983       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1018 18:26:02.210010       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 18:26:02.213096       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 18:26:02.213110       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 18:26:02.213138       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 18:26:02.213144       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 18:26:02.311649       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1018 18:26:02.314976       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 18:26:02.315995       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Oct 18 18:26:06 no-preload-729957 kubelet[765]: I1018 18:26:06.286107     765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/c399e443-ef3f-4155-9f03-484901165b54-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-dq5cz\" (UID: \"c399e443-ef3f-4155-9f03-484901165b54\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dq5cz"
	Oct 18 18:26:06 no-preload-729957 kubelet[765]: I1018 18:26:06.286693     765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2mxv\" (UniqueName: \"kubernetes.io/projected/c399e443-ef3f-4155-9f03-484901165b54-kube-api-access-b2mxv\") pod \"kubernetes-dashboard-855c9754f9-dq5cz\" (UID: \"c399e443-ef3f-4155-9f03-484901165b54\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dq5cz"
	Oct 18 18:26:06 no-preload-729957 kubelet[765]: I1018 18:26:06.286800     765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pllsk\" (UniqueName: \"kubernetes.io/projected/6be4ae09-b8e8-4a46-8751-4d264f7697ab-kube-api-access-pllsk\") pod \"dashboard-metrics-scraper-6ffb444bf9-2jw6d\" (UID: \"6be4ae09-b8e8-4a46-8751-4d264f7697ab\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2jw6d"
	Oct 18 18:26:06 no-preload-729957 kubelet[765]: I1018 18:26:06.286890     765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/6be4ae09-b8e8-4a46-8751-4d264f7697ab-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-2jw6d\" (UID: \"6be4ae09-b8e8-4a46-8751-4d264f7697ab\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2jw6d"
	Oct 18 18:26:06 no-preload-729957 kubelet[765]: W1018 18:26:06.594535     765 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/26cea4068f8df271decd5fca2af44d16fcce605ab26c19246830b355e9629673/crio-3a4d29ccc9199d180e601ab1694cef753061bd19651ca598ef308c874a2bb2ae WatchSource:0}: Error finding container 3a4d29ccc9199d180e601ab1694cef753061bd19651ca598ef308c874a2bb2ae: Status 404 returned error can't find the container with id 3a4d29ccc9199d180e601ab1694cef753061bd19651ca598ef308c874a2bb2ae
	Oct 18 18:26:06 no-preload-729957 kubelet[765]: W1018 18:26:06.611613     765 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/26cea4068f8df271decd5fca2af44d16fcce605ab26c19246830b355e9629673/crio-7333b2aa64d8d15687e3137bff890d9d49b81245b5fb3234dc547004b7d43f16 WatchSource:0}: Error finding container 7333b2aa64d8d15687e3137bff890d9d49b81245b5fb3234dc547004b7d43f16: Status 404 returned error can't find the container with id 7333b2aa64d8d15687e3137bff890d9d49b81245b5fb3234dc547004b7d43f16
	Oct 18 18:26:11 no-preload-729957 kubelet[765]: I1018 18:26:11.742471     765 scope.go:117] "RemoveContainer" containerID="c23437207413cefff51a6912eb55b1b0b5065130bf4655ded8d7862bd43595fd"
	Oct 18 18:26:12 no-preload-729957 kubelet[765]: I1018 18:26:12.747118     765 scope.go:117] "RemoveContainer" containerID="c23437207413cefff51a6912eb55b1b0b5065130bf4655ded8d7862bd43595fd"
	Oct 18 18:26:12 no-preload-729957 kubelet[765]: I1018 18:26:12.747410     765 scope.go:117] "RemoveContainer" containerID="746019b0ec9b8fbc991561dd0fafcc67d3592ed100d0dd188df361fec5595531"
	Oct 18 18:26:12 no-preload-729957 kubelet[765]: E1018 18:26:12.747551     765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2jw6d_kubernetes-dashboard(6be4ae09-b8e8-4a46-8751-4d264f7697ab)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2jw6d" podUID="6be4ae09-b8e8-4a46-8751-4d264f7697ab"
	Oct 18 18:26:13 no-preload-729957 kubelet[765]: I1018 18:26:13.754716     765 scope.go:117] "RemoveContainer" containerID="746019b0ec9b8fbc991561dd0fafcc67d3592ed100d0dd188df361fec5595531"
	Oct 18 18:26:13 no-preload-729957 kubelet[765]: E1018 18:26:13.755312     765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2jw6d_kubernetes-dashboard(6be4ae09-b8e8-4a46-8751-4d264f7697ab)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2jw6d" podUID="6be4ae09-b8e8-4a46-8751-4d264f7697ab"
	Oct 18 18:26:16 no-preload-729957 kubelet[765]: I1018 18:26:16.560154     765 scope.go:117] "RemoveContainer" containerID="746019b0ec9b8fbc991561dd0fafcc67d3592ed100d0dd188df361fec5595531"
	Oct 18 18:26:16 no-preload-729957 kubelet[765]: E1018 18:26:16.560930     765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2jw6d_kubernetes-dashboard(6be4ae09-b8e8-4a46-8751-4d264f7697ab)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2jw6d" podUID="6be4ae09-b8e8-4a46-8751-4d264f7697ab"
	Oct 18 18:26:28 no-preload-729957 kubelet[765]: I1018 18:26:28.599439     765 scope.go:117] "RemoveContainer" containerID="746019b0ec9b8fbc991561dd0fafcc67d3592ed100d0dd188df361fec5595531"
	Oct 18 18:26:28 no-preload-729957 kubelet[765]: I1018 18:26:28.796203     765 scope.go:117] "RemoveContainer" containerID="746019b0ec9b8fbc991561dd0fafcc67d3592ed100d0dd188df361fec5595531"
	Oct 18 18:26:28 no-preload-729957 kubelet[765]: I1018 18:26:28.796486     765 scope.go:117] "RemoveContainer" containerID="471127644b325b85c5c10f6876205a690ae590a617ae0a3345a5d15788948065"
	Oct 18 18:26:28 no-preload-729957 kubelet[765]: E1018 18:26:28.796655     765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2jw6d_kubernetes-dashboard(6be4ae09-b8e8-4a46-8751-4d264f7697ab)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2jw6d" podUID="6be4ae09-b8e8-4a46-8751-4d264f7697ab"
	Oct 18 18:26:28 no-preload-729957 kubelet[765]: I1018 18:26:28.839246     765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dq5cz" podStartSLOduration=11.340258849 podStartE2EDuration="22.839229132s" podCreationTimestamp="2025-10-18 18:26:06 +0000 UTC" firstStartedPulling="2025-10-18 18:26:06.615676469 +0000 UTC m=+14.330519737" lastFinishedPulling="2025-10-18 18:26:18.114646752 +0000 UTC m=+25.829490020" observedRunningTime="2025-10-18 18:26:18.789137653 +0000 UTC m=+26.503980937" watchObservedRunningTime="2025-10-18 18:26:28.839229132 +0000 UTC m=+36.554072400"
	Oct 18 18:26:33 no-preload-729957 kubelet[765]: I1018 18:26:33.812309     765 scope.go:117] "RemoveContainer" containerID="0fa470fa642abc5faf16ee6eb2a3332179be9e9bd3853405ee4a917524746026"
	Oct 18 18:26:36 no-preload-729957 kubelet[765]: I1018 18:26:36.560055     765 scope.go:117] "RemoveContainer" containerID="471127644b325b85c5c10f6876205a690ae590a617ae0a3345a5d15788948065"
	Oct 18 18:26:36 no-preload-729957 kubelet[765]: E1018 18:26:36.560235     765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2jw6d_kubernetes-dashboard(6be4ae09-b8e8-4a46-8751-4d264f7697ab)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2jw6d" podUID="6be4ae09-b8e8-4a46-8751-4d264f7697ab"
	Oct 18 18:26:48 no-preload-729957 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 18:26:48 no-preload-729957 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 18:26:48 no-preload-729957 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [7624f6abd459809bd3046f1c044b4a4b33cb3de17198c331adda43e222af9966] <==
	2025/10/18 18:26:18 Using namespace: kubernetes-dashboard
	2025/10/18 18:26:18 Using in-cluster config to connect to apiserver
	2025/10/18 18:26:18 Using secret token for csrf signing
	2025/10/18 18:26:18 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/18 18:26:18 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/18 18:26:18 Successful initial request to the apiserver, version: v1.34.1
	2025/10/18 18:26:18 Generating JWE encryption key
	2025/10/18 18:26:18 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/18 18:26:18 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/18 18:26:19 Initializing JWE encryption key from synchronized object
	2025/10/18 18:26:19 Creating in-cluster Sidecar client
	2025/10/18 18:26:19 Serving insecurely on HTTP port: 9090
	2025/10/18 18:26:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 18:26:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 18:26:18 Starting overwatch
	
	
	==> storage-provisioner [0fa470fa642abc5faf16ee6eb2a3332179be9e9bd3853405ee4a917524746026] <==
	I1018 18:26:03.153983       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1018 18:26:33.159644       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [f8531bead1ef2e3eada16cba59c589956dc484ab1cff2f47a411a5f3dbd97427] <==
	I1018 18:26:33.911050       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 18:26:33.944334       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 18:26:33.944452       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1018 18:26:33.948518       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:26:37.416189       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:26:41.677334       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:26:45.280707       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:26:48.335851       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:26:51.358286       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:26:51.363997       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 18:26:51.364182       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 18:26:51.364429       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-729957_748a7a91-4c2e-4054-994d-961dc505ea48!
	I1018 18:26:51.366023       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"707a2d03-df04-488e-b561-b69c9acdb2d6", APIVersion:"v1", ResourceVersion:"682", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-729957_748a7a91-4c2e-4054-994d-961dc505ea48 became leader
	W1018 18:26:51.373892       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:26:51.378472       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 18:26:51.464852       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-729957_748a7a91-4c2e-4054-994d-961dc505ea48!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-729957 -n no-preload-729957
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-729957 -n no-preload-729957: exit status 2 (370.650684ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-729957 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-729957
helpers_test.go:243: (dbg) docker inspect no-preload-729957:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "26cea4068f8df271decd5fca2af44d16fcce605ab26c19246830b355e9629673",
	        "Created": "2025-10-18T18:24:12.31875014Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 221364,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T18:25:44.685537731Z",
	            "FinishedAt": "2025-10-18T18:25:43.457083285Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/26cea4068f8df271decd5fca2af44d16fcce605ab26c19246830b355e9629673/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/26cea4068f8df271decd5fca2af44d16fcce605ab26c19246830b355e9629673/hostname",
	        "HostsPath": "/var/lib/docker/containers/26cea4068f8df271decd5fca2af44d16fcce605ab26c19246830b355e9629673/hosts",
	        "LogPath": "/var/lib/docker/containers/26cea4068f8df271decd5fca2af44d16fcce605ab26c19246830b355e9629673/26cea4068f8df271decd5fca2af44d16fcce605ab26c19246830b355e9629673-json.log",
	        "Name": "/no-preload-729957",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-729957:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-729957",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "26cea4068f8df271decd5fca2af44d16fcce605ab26c19246830b355e9629673",
	                "LowerDir": "/var/lib/docker/overlay2/23e3b3ca1f79e937b59a52dcaa595b90f6276c9c388c3cfb57d1e199b659f3cd-init/diff:/var/lib/docker/overlay2/584ab177b02ad2db5330471b7171ad39934c457d8615b9ee4939a04b59f78474/diff",
	                "MergedDir": "/var/lib/docker/overlay2/23e3b3ca1f79e937b59a52dcaa595b90f6276c9c388c3cfb57d1e199b659f3cd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/23e3b3ca1f79e937b59a52dcaa595b90f6276c9c388c3cfb57d1e199b659f3cd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/23e3b3ca1f79e937b59a52dcaa595b90f6276c9c388c3cfb57d1e199b659f3cd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-729957",
	                "Source": "/var/lib/docker/volumes/no-preload-729957/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-729957",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-729957",
	                "name.minikube.sigs.k8s.io": "no-preload-729957",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "553f21ec8a7c91699b09c42ba4ac9cb745f0346087552b1219544a0a9cff0d07",
	            "SandboxKey": "/var/run/docker/netns/553f21ec8a7c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-729957": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d2:66:ed:76:2c:92",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9171cfee9247515a7d76872523f6d046330152cbb9ee1a62de7b40aaab7a7a81",
	                    "EndpointID": "a7be7c6c91b648fc85680750cdbea044c402b0173d749aa2b2f1d7ab67845f2b",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-729957",
	                        "26cea4068f8d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-729957 -n no-preload-729957
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-729957 -n no-preload-729957: exit status 2 (347.83585ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-729957 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-729957 logs -n 25: (1.296710123s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ pause   │ -p default-k8s-diff-port-192562 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-192562 │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-192562                                                                                                                                                                                                               │ default-k8s-diff-port-192562 │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │ 18 Oct 25 18:24 UTC │
	│ delete  │ -p default-k8s-diff-port-192562                                                                                                                                                                                                               │ default-k8s-diff-port-192562 │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │ 18 Oct 25 18:24 UTC │
	│ delete  │ -p disable-driver-mounts-747178                                                                                                                                                                                                               │ disable-driver-mounts-747178 │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │ 18 Oct 25 18:24 UTC │
	│ start   │ -p no-preload-729957 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-729957            │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │ 18 Oct 25 18:25 UTC │
	│ image   │ embed-certs-213943 image list --format=json                                                                                                                                                                                                   │ embed-certs-213943           │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │ 18 Oct 25 18:24 UTC │
	│ pause   │ -p embed-certs-213943 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-213943           │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │                     │
	│ delete  │ -p embed-certs-213943                                                                                                                                                                                                                         │ embed-certs-213943           │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │ 18 Oct 25 18:24 UTC │
	│ delete  │ -p embed-certs-213943                                                                                                                                                                                                                         │ embed-certs-213943           │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │ 18 Oct 25 18:24 UTC │
	│ start   │ -p newest-cni-530891 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-530891            │ jenkins │ v1.37.0 │ 18 Oct 25 18:24 UTC │ 18 Oct 25 18:25 UTC │
	│ addons  │ enable metrics-server -p newest-cni-530891 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-530891            │ jenkins │ v1.37.0 │ 18 Oct 25 18:25 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-729957 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-729957            │ jenkins │ v1.37.0 │ 18 Oct 25 18:25 UTC │                     │
	│ stop    │ -p newest-cni-530891 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-530891            │ jenkins │ v1.37.0 │ 18 Oct 25 18:25 UTC │ 18 Oct 25 18:25 UTC │
	│ addons  │ enable dashboard -p newest-cni-530891 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-530891            │ jenkins │ v1.37.0 │ 18 Oct 25 18:25 UTC │ 18 Oct 25 18:25 UTC │
	│ start   │ -p newest-cni-530891 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-530891            │ jenkins │ v1.37.0 │ 18 Oct 25 18:25 UTC │ 18 Oct 25 18:25 UTC │
	│ stop    │ -p no-preload-729957 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-729957            │ jenkins │ v1.37.0 │ 18 Oct 25 18:25 UTC │ 18 Oct 25 18:25 UTC │
	│ addons  │ enable dashboard -p no-preload-729957 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-729957            │ jenkins │ v1.37.0 │ 18 Oct 25 18:25 UTC │ 18 Oct 25 18:25 UTC │
	│ start   │ -p no-preload-729957 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-729957            │ jenkins │ v1.37.0 │ 18 Oct 25 18:25 UTC │ 18 Oct 25 18:26 UTC │
	│ image   │ newest-cni-530891 image list --format=json                                                                                                                                                                                                    │ newest-cni-530891            │ jenkins │ v1.37.0 │ 18 Oct 25 18:25 UTC │ 18 Oct 25 18:25 UTC │
	│ pause   │ -p newest-cni-530891 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-530891            │ jenkins │ v1.37.0 │ 18 Oct 25 18:25 UTC │                     │
	│ delete  │ -p newest-cni-530891                                                                                                                                                                                                                          │ newest-cni-530891            │ jenkins │ v1.37.0 │ 18 Oct 25 18:25 UTC │ 18 Oct 25 18:25 UTC │
	│ delete  │ -p newest-cni-530891                                                                                                                                                                                                                          │ newest-cni-530891            │ jenkins │ v1.37.0 │ 18 Oct 25 18:25 UTC │ 18 Oct 25 18:25 UTC │
	│ start   │ -p auto-111074 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-111074                  │ jenkins │ v1.37.0 │ 18 Oct 25 18:25 UTC │                     │
	│ image   │ no-preload-729957 image list --format=json                                                                                                                                                                                                    │ no-preload-729957            │ jenkins │ v1.37.0 │ 18 Oct 25 18:26 UTC │ 18 Oct 25 18:26 UTC │
	│ pause   │ -p no-preload-729957 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-729957            │ jenkins │ v1.37.0 │ 18 Oct 25 18:26 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 18:25:57
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 18:25:57.609163  224323 out.go:360] Setting OutFile to fd 1 ...
	I1018 18:25:57.609392  224323 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 18:25:57.609416  224323 out.go:374] Setting ErrFile to fd 2...
	I1018 18:25:57.609436  224323 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 18:25:57.609711  224323 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-2509/.minikube/bin
	I1018 18:25:57.610133  224323 out.go:368] Setting JSON to false
	I1018 18:25:57.611071  224323 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7707,"bootTime":1760804251,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1018 18:25:57.611158  224323 start.go:141] virtualization:  
	I1018 18:25:57.615064  224323 out.go:179] * [auto-111074] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 18:25:57.619275  224323 out.go:179]   - MINIKUBE_LOCATION=21409
	I1018 18:25:57.619342  224323 notify.go:220] Checking for updates...
	I1018 18:25:57.626141  224323 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 18:25:57.629155  224323 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-2509/kubeconfig
	I1018 18:25:57.632037  224323 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-2509/.minikube
	I1018 18:25:57.635011  224323 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 18:25:57.638034  224323 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 18:25:57.641571  224323 config.go:182] Loaded profile config "no-preload-729957": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 18:25:57.641672  224323 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 18:25:57.704477  224323 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 18:25:57.704601  224323 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 18:25:57.836454  224323 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-18 18:25:57.82641665 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 18:25:57.836552  224323 docker.go:318] overlay module found
	I1018 18:25:57.839750  224323 out.go:179] * Using the docker driver based on user configuration
	I1018 18:25:57.842672  224323 start.go:305] selected driver: docker
	I1018 18:25:57.842691  224323 start.go:925] validating driver "docker" against <nil>
	I1018 18:25:57.842705  224323 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 18:25:57.843384  224323 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 18:25:57.947183  224323 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-18 18:25:57.935286064 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 18:25:57.947336  224323 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 18:25:57.947540  224323 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 18:25:57.950493  224323 out.go:179] * Using Docker driver with root privileges
	I1018 18:25:57.953450  224323 cni.go:84] Creating CNI manager for ""
	I1018 18:25:57.953520  224323 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 18:25:57.953532  224323 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 18:25:57.953606  224323 start.go:349] cluster config:
	{Name:auto-111074 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-111074 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 18:25:57.956774  224323 out.go:179] * Starting "auto-111074" primary control-plane node in "auto-111074" cluster
	I1018 18:25:57.959532  224323 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 18:25:57.962389  224323 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 18:25:57.965062  224323 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 18:25:57.965107  224323 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1018 18:25:57.965117  224323 cache.go:58] Caching tarball of preloaded images
	I1018 18:25:57.965204  224323 preload.go:233] Found /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 18:25:57.965221  224323 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 18:25:57.965332  224323 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/auto-111074/config.json ...
	I1018 18:25:57.965352  224323 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/auto-111074/config.json: {Name:mkbb82346508b84aaf227169a59c31534a3f406d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:25:57.965497  224323 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 18:25:57.998536  224323 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 18:25:57.998555  224323 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 18:25:57.998567  224323 cache.go:232] Successfully downloaded all kic artifacts
	I1018 18:25:57.998588  224323 start.go:360] acquireMachinesLock for auto-111074: {Name:mk75369a1a9bfcfe98d7f880f24bb4d102e5b8ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 18:25:57.998680  224323 start.go:364] duration metric: took 77.088µs to acquireMachinesLock for "auto-111074"
	I1018 18:25:57.998707  224323 start.go:93] Provisioning new machine with config: &{Name:auto-111074 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-111074 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 18:25:57.998772  224323 start.go:125] createHost starting for "" (driver="docker")
	I1018 18:25:54.307067  221240 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1018 18:25:54.307087  221240 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1018 18:25:54.393328  221240 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1018 18:25:54.393349  221240 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1018 18:25:54.466168  221240 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1018 18:25:54.466185  221240 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1018 18:25:54.519984  221240 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1018 18:25:54.520006  221240 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1018 18:25:54.549443  221240 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1018 18:25:54.549465  221240 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1018 18:25:54.570273  221240 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1018 18:25:54.570294  221240 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1018 18:25:54.617388  221240 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1018 18:25:54.617409  221240 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1018 18:25:54.651310  221240 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 18:25:54.651330  221240 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1018 18:25:54.677285  221240 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 18:25:58.002196  224323 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1018 18:25:58.002471  224323 start.go:159] libmachine.API.Create for "auto-111074" (driver="docker")
	I1018 18:25:58.002517  224323 client.go:168] LocalClient.Create starting
	I1018 18:25:58.002595  224323 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem
	I1018 18:25:58.002628  224323 main.go:141] libmachine: Decoding PEM data...
	I1018 18:25:58.002642  224323 main.go:141] libmachine: Parsing certificate...
	I1018 18:25:58.002706  224323 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem
	I1018 18:25:58.002725  224323 main.go:141] libmachine: Decoding PEM data...
	I1018 18:25:58.002736  224323 main.go:141] libmachine: Parsing certificate...
	I1018 18:25:58.003154  224323 cli_runner.go:164] Run: docker network inspect auto-111074 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 18:25:58.030563  224323 cli_runner.go:211] docker network inspect auto-111074 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 18:25:58.030658  224323 network_create.go:284] running [docker network inspect auto-111074] to gather additional debugging logs...
	I1018 18:25:58.030675  224323 cli_runner.go:164] Run: docker network inspect auto-111074
	W1018 18:25:58.046975  224323 cli_runner.go:211] docker network inspect auto-111074 returned with exit code 1
	I1018 18:25:58.047002  224323 network_create.go:287] error running [docker network inspect auto-111074]: docker network inspect auto-111074: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-111074 not found
	I1018 18:25:58.047023  224323 network_create.go:289] output of [docker network inspect auto-111074]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-111074 not found
	
	** /stderr **
	I1018 18:25:58.047116  224323 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 18:25:58.063221  224323 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-903568cdf824 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:7a:80:c0:8c:ed} reservation:<nil>}
	I1018 18:25:58.063532  224323 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-ee9fcaab9ca8 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a6:a7:65:1b:c0:41} reservation:<nil>}
	I1018 18:25:58.063833  224323 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-414fc11e154b IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:86:f0:a8:1a:86:00} reservation:<nil>}
	I1018 18:25:58.064076  224323 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-9171cfee9247 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:e6:21:8a:96:2d:4e} reservation:<nil>}
	I1018 18:25:58.064486  224323 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a13100}
	I1018 18:25:58.064505  224323 network_create.go:124] attempt to create docker network auto-111074 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1018 18:25:58.064569  224323 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-111074 auto-111074
	I1018 18:25:58.151076  224323 network_create.go:108] docker network auto-111074 192.168.85.0/24 created
	I1018 18:25:58.151102  224323 kic.go:121] calculated static IP "192.168.85.2" for the "auto-111074" container
	I1018 18:25:58.151175  224323 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 18:25:58.174584  224323 cli_runner.go:164] Run: docker volume create auto-111074 --label name.minikube.sigs.k8s.io=auto-111074 --label created_by.minikube.sigs.k8s.io=true
	I1018 18:25:58.215861  224323 oci.go:103] Successfully created a docker volume auto-111074
	I1018 18:25:58.215936  224323 cli_runner.go:164] Run: docker run --rm --name auto-111074-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-111074 --entrypoint /usr/bin/test -v auto-111074:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 18:25:58.973948  224323 oci.go:107] Successfully prepared a docker volume auto-111074
	I1018 18:25:58.973995  224323 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 18:25:58.974030  224323 kic.go:194] Starting extracting preloaded images to volume ...
	I1018 18:25:58.974100  224323 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-111074:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1018 18:26:02.721104  221240 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.573396552s)
	I1018 18:26:02.721160  221240 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (8.567505809s)
	I1018 18:26:02.721188  221240 node_ready.go:35] waiting up to 6m0s for node "no-preload-729957" to be "Ready" ...
	I1018 18:26:02.721483  221240 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.528374311s)
	I1018 18:26:02.775049  221240 node_ready.go:49] node "no-preload-729957" is "Ready"
	I1018 18:26:02.775082  221240 node_ready.go:38] duration metric: took 53.876862ms for node "no-preload-729957" to be "Ready" ...
	I1018 18:26:02.775098  221240 api_server.go:52] waiting for apiserver process to appear ...
	I1018 18:26:02.775176  221240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 18:26:03.197456  221240 api_server.go:72] duration metric: took 9.551715057s to wait for apiserver process to appear ...
	I1018 18:26:03.197479  221240 api_server.go:88] waiting for apiserver healthz status ...
	I1018 18:26:03.197498  221240 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 18:26:03.197816  221240 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (8.520495745s)
	I1018 18:26:03.206414  221240 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 18:26:03.206444  221240 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 18:26:03.207087  221240 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-729957 addons enable metrics-server
	
	I1018 18:26:03.217445  221240 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1018 18:26:03.223723  221240 addons.go:514] duration metric: took 9.577568493s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1018 18:26:03.698973  221240 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 18:26:03.728363  221240 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1018 18:26:03.729870  221240 api_server.go:141] control plane version: v1.34.1
	I1018 18:26:03.729891  221240 api_server.go:131] duration metric: took 532.404803ms to wait for apiserver health ...
	I1018 18:26:03.729901  221240 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 18:26:03.753759  221240 system_pods.go:59] 8 kube-system pods found
	I1018 18:26:03.753794  221240 system_pods.go:61] "coredns-66bc5c9577-q7mng" [365b51ac-c2aa-4247-a37e-ef5ce5d54a36] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 18:26:03.753806  221240 system_pods.go:61] "etcd-no-preload-729957" [29023f58-84ea-44ad-b6e8-cc5cf720a4be] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 18:26:03.753815  221240 system_pods.go:61] "kindnet-4hbt7" [6c9fa05f-7c37-442d-b3fa-ee037c865d3e] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1018 18:26:03.753823  221240 system_pods.go:61] "kube-apiserver-no-preload-729957" [ea721a8e-b407-4422-b1c1-dc40032787ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 18:26:03.753832  221240 system_pods.go:61] "kube-controller-manager-no-preload-729957" [bf889e9e-777e-403a-b4ef-3582a86bafbb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 18:26:03.753840  221240 system_pods.go:61] "kube-proxy-75znn" [c6f7e4f1-ccc0-40c5-b449-fb42e743f373] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1018 18:26:03.753847  221240 system_pods.go:61] "kube-scheduler-no-preload-729957" [fa436526-c2f9-43b9-a48e-57dc63916082] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 18:26:03.753851  221240 system_pods.go:61] "storage-provisioner" [4bef6a17-c67c-4394-837e-c20c6378a6ed] Running
	I1018 18:26:03.753857  221240 system_pods.go:74] duration metric: took 23.943758ms to wait for pod list to return data ...
	I1018 18:26:03.753865  221240 default_sa.go:34] waiting for default service account to be created ...
	I1018 18:26:03.761524  221240 default_sa.go:45] found service account: "default"
	I1018 18:26:03.761547  221240 default_sa.go:55] duration metric: took 7.676349ms for default service account to be created ...
	I1018 18:26:03.761558  221240 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 18:26:03.774696  221240 system_pods.go:86] 8 kube-system pods found
	I1018 18:26:03.774731  221240 system_pods.go:89] "coredns-66bc5c9577-q7mng" [365b51ac-c2aa-4247-a37e-ef5ce5d54a36] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 18:26:03.774750  221240 system_pods.go:89] "etcd-no-preload-729957" [29023f58-84ea-44ad-b6e8-cc5cf720a4be] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 18:26:03.774757  221240 system_pods.go:89] "kindnet-4hbt7" [6c9fa05f-7c37-442d-b3fa-ee037c865d3e] Running
	I1018 18:26:03.774765  221240 system_pods.go:89] "kube-apiserver-no-preload-729957" [ea721a8e-b407-4422-b1c1-dc40032787ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 18:26:03.774772  221240 system_pods.go:89] "kube-controller-manager-no-preload-729957" [bf889e9e-777e-403a-b4ef-3582a86bafbb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 18:26:03.774778  221240 system_pods.go:89] "kube-proxy-75znn" [c6f7e4f1-ccc0-40c5-b449-fb42e743f373] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1018 18:26:03.774785  221240 system_pods.go:89] "kube-scheduler-no-preload-729957" [fa436526-c2f9-43b9-a48e-57dc63916082] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 18:26:03.774789  221240 system_pods.go:89] "storage-provisioner" [4bef6a17-c67c-4394-837e-c20c6378a6ed] Running
	I1018 18:26:03.774796  221240 system_pods.go:126] duration metric: took 13.233409ms to wait for k8s-apps to be running ...
	I1018 18:26:03.774805  221240 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 18:26:03.774954  221240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 18:26:03.794488  221240 system_svc.go:56] duration metric: took 19.673369ms WaitForService to wait for kubelet
	I1018 18:26:03.794515  221240 kubeadm.go:586] duration metric: took 10.148777857s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 18:26:03.794533  221240 node_conditions.go:102] verifying NodePressure condition ...
	I1018 18:26:03.802993  221240 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 18:26:03.803020  221240 node_conditions.go:123] node cpu capacity is 2
	I1018 18:26:03.803032  221240 node_conditions.go:105] duration metric: took 8.493812ms to run NodePressure ...
	I1018 18:26:03.803045  221240 start.go:241] waiting for startup goroutines ...
	I1018 18:26:03.803053  221240 start.go:246] waiting for cluster config update ...
	I1018 18:26:03.803064  221240 start.go:255] writing updated cluster config ...
	I1018 18:26:03.803408  221240 ssh_runner.go:195] Run: rm -f paused
	I1018 18:26:03.810508  221240 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 18:26:03.815687  221240 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-q7mng" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:26:03.675174  224323 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-111074:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.701023076s)
	I1018 18:26:03.675275  224323 kic.go:203] duration metric: took 4.70125498s to extract preloaded images to volume ...
	W1018 18:26:03.675417  224323 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1018 18:26:03.675530  224323 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 18:26:03.788240  224323 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-111074 --name auto-111074 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-111074 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-111074 --network auto-111074 --ip 192.168.85.2 --volume auto-111074:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 18:26:04.116770  224323 cli_runner.go:164] Run: docker container inspect auto-111074 --format={{.State.Running}}
	I1018 18:26:04.137918  224323 cli_runner.go:164] Run: docker container inspect auto-111074 --format={{.State.Status}}
	I1018 18:26:04.162125  224323 cli_runner.go:164] Run: docker exec auto-111074 stat /var/lib/dpkg/alternatives/iptables
	I1018 18:26:04.219459  224323 oci.go:144] the created container "auto-111074" has a running status.
	I1018 18:26:04.219505  224323 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/auto-111074/id_rsa...
	I1018 18:26:05.302290  224323 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21409-2509/.minikube/machines/auto-111074/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 18:26:05.327910  224323 cli_runner.go:164] Run: docker container inspect auto-111074 --format={{.State.Status}}
	I1018 18:26:05.346531  224323 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 18:26:05.346554  224323 kic_runner.go:114] Args: [docker exec --privileged auto-111074 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 18:26:05.389711  224323 cli_runner.go:164] Run: docker container inspect auto-111074 --format={{.State.Status}}
	I1018 18:26:05.406668  224323 machine.go:93] provisionDockerMachine start ...
	I1018 18:26:05.406794  224323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-111074
	I1018 18:26:05.425480  224323 main.go:141] libmachine: Using SSH client type: native
	I1018 18:26:05.425836  224323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1018 18:26:05.425857  224323 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 18:26:05.426484  224323 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56442->127.0.0.1:33093: read: connection reset by peer
	W1018 18:26:05.821867  221240 pod_ready.go:104] pod "coredns-66bc5c9577-q7mng" is not "Ready", error: <nil>
	W1018 18:26:07.823007  221240 pod_ready.go:104] pod "coredns-66bc5c9577-q7mng" is not "Ready", error: <nil>
	I1018 18:26:08.592684  224323 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-111074
	
	I1018 18:26:08.592761  224323 ubuntu.go:182] provisioning hostname "auto-111074"
	I1018 18:26:08.592878  224323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-111074
	I1018 18:26:08.617878  224323 main.go:141] libmachine: Using SSH client type: native
	I1018 18:26:08.618188  224323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1018 18:26:08.618200  224323 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-111074 && echo "auto-111074" | sudo tee /etc/hostname
	I1018 18:26:08.794584  224323 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-111074
	
	I1018 18:26:08.794741  224323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-111074
	I1018 18:26:08.832863  224323 main.go:141] libmachine: Using SSH client type: native
	I1018 18:26:08.833294  224323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1018 18:26:08.833316  224323 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-111074' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-111074/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-111074' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 18:26:08.989204  224323 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 18:26:08.989281  224323 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21409-2509/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-2509/.minikube}
	I1018 18:26:08.989351  224323 ubuntu.go:190] setting up certificates
	I1018 18:26:08.989386  224323 provision.go:84] configureAuth start
	I1018 18:26:08.989492  224323 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-111074
	I1018 18:26:09.014400  224323 provision.go:143] copyHostCerts
	I1018 18:26:09.014459  224323 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem, removing ...
	I1018 18:26:09.014469  224323 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem
	I1018 18:26:09.014542  224323 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/ca.pem (1078 bytes)
	I1018 18:26:09.014655  224323 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem, removing ...
	I1018 18:26:09.014661  224323 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem
	I1018 18:26:09.014690  224323 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/cert.pem (1123 bytes)
	I1018 18:26:09.014744  224323 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem, removing ...
	I1018 18:26:09.014749  224323 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem
	I1018 18:26:09.014771  224323 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-2509/.minikube/key.pem (1675 bytes)
	I1018 18:26:09.014832  224323 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem org=jenkins.auto-111074 san=[127.0.0.1 192.168.85.2 auto-111074 localhost minikube]
	I1018 18:26:09.412351  224323 provision.go:177] copyRemoteCerts
	I1018 18:26:09.412430  224323 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 18:26:09.412477  224323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-111074
	I1018 18:26:09.438714  224323 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/auto-111074/id_rsa Username:docker}
	I1018 18:26:09.557682  224323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1018 18:26:09.579667  224323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 18:26:09.601410  224323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 18:26:09.622889  224323 provision.go:87] duration metric: took 633.467396ms to configureAuth
	I1018 18:26:09.622913  224323 ubuntu.go:206] setting minikube options for container-runtime
	I1018 18:26:09.623095  224323 config.go:182] Loaded profile config "auto-111074": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 18:26:09.623197  224323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-111074
	I1018 18:26:09.649200  224323 main.go:141] libmachine: Using SSH client type: native
	I1018 18:26:09.649520  224323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1018 18:26:09.649543  224323 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 18:26:09.964320  224323 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 18:26:09.964347  224323 machine.go:96] duration metric: took 4.557654213s to provisionDockerMachine
	I1018 18:26:09.964357  224323 client.go:171] duration metric: took 11.961834302s to LocalClient.Create
	I1018 18:26:09.964372  224323 start.go:167] duration metric: took 11.961904145s to libmachine.API.Create "auto-111074"
	I1018 18:26:09.964379  224323 start.go:293] postStartSetup for "auto-111074" (driver="docker")
	I1018 18:26:09.964388  224323 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 18:26:09.964462  224323 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 18:26:09.964512  224323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-111074
	I1018 18:26:09.994373  224323 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/auto-111074/id_rsa Username:docker}
	I1018 18:26:10.106333  224323 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 18:26:10.111401  224323 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 18:26:10.111433  224323 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 18:26:10.111445  224323 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/addons for local assets ...
	I1018 18:26:10.111504  224323 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-2509/.minikube/files for local assets ...
	I1018 18:26:10.111591  224323 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem -> 43202.pem in /etc/ssl/certs
	I1018 18:26:10.111726  224323 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 18:26:10.120862  224323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /etc/ssl/certs/43202.pem (1708 bytes)
	I1018 18:26:10.146009  224323 start.go:296] duration metric: took 181.61568ms for postStartSetup
	I1018 18:26:10.149576  224323 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-111074
	I1018 18:26:10.170567  224323 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/auto-111074/config.json ...
	I1018 18:26:10.170871  224323 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 18:26:10.170922  224323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-111074
	I1018 18:26:10.196832  224323 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/auto-111074/id_rsa Username:docker}
	I1018 18:26:10.306232  224323 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 18:26:10.311492  224323 start.go:128] duration metric: took 12.312705043s to createHost
	I1018 18:26:10.311513  224323 start.go:83] releasing machines lock for "auto-111074", held for 12.312825192s
	I1018 18:26:10.311583  224323 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-111074
	I1018 18:26:10.334307  224323 ssh_runner.go:195] Run: cat /version.json
	I1018 18:26:10.334365  224323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-111074
	I1018 18:26:10.334605  224323 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 18:26:10.334661  224323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-111074
	I1018 18:26:10.370171  224323 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/auto-111074/id_rsa Username:docker}
	I1018 18:26:10.370433  224323 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/auto-111074/id_rsa Username:docker}
	I1018 18:26:10.493052  224323 ssh_runner.go:195] Run: systemctl --version
	I1018 18:26:10.597987  224323 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 18:26:10.665641  224323 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 18:26:10.672264  224323 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 18:26:10.672410  224323 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 18:26:10.712083  224323 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1018 18:26:10.712114  224323 start.go:495] detecting cgroup driver to use...
	I1018 18:26:10.712147  224323 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 18:26:10.712209  224323 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 18:26:10.736157  224323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 18:26:10.756415  224323 docker.go:218] disabling cri-docker service (if available) ...
	I1018 18:26:10.756477  224323 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 18:26:10.777168  224323 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 18:26:10.807172  224323 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 18:26:10.984041  224323 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 18:26:11.145532  224323 docker.go:234] disabling docker service ...
	I1018 18:26:11.145672  224323 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 18:26:11.177807  224323 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 18:26:11.193063  224323 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 18:26:11.348246  224323 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 18:26:11.509912  224323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 18:26:11.523879  224323 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 18:26:11.540287  224323 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 18:26:11.540352  224323 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:26:11.550213  224323 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 18:26:11.550283  224323 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:26:11.559662  224323 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:26:11.568869  224323 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:26:11.578837  224323 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 18:26:11.588416  224323 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:26:11.608740  224323 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:26:11.622291  224323 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 18:26:11.634232  224323 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 18:26:11.641619  224323 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 18:26:11.650371  224323 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 18:26:11.800781  224323 ssh_runner.go:195] Run: sudo systemctl restart crio
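(Illustrative note: the sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup_manager, conmon_cgroup, unprivileged-port sysctl) and write /etc/crictl.yaml before the crio restart. A minimal sketch, run inside the node, to confirm what was written; paths and values are taken from the log lines above:)

	# Not from the report: verify the CRI-O settings the sed edits above left behind.
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	cat /etc/crictl.yaml   # expected: runtime-endpoint: unix:///var/run/crio/crio.sock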
	I1018 18:26:12.297583  224323 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 18:26:12.297657  224323 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 18:26:12.306661  224323 start.go:563] Will wait 60s for crictl version
	I1018 18:26:12.306725  224323 ssh_runner.go:195] Run: which crictl
	I1018 18:26:12.311273  224323 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 18:26:12.363301  224323 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 18:26:12.363449  224323 ssh_runner.go:195] Run: crio --version
	I1018 18:26:12.426960  224323 ssh_runner.go:195] Run: crio --version
	I1018 18:26:12.470676  224323 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 18:26:12.473683  224323 cli_runner.go:164] Run: docker network inspect auto-111074 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 18:26:12.491437  224323 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1018 18:26:12.495816  224323 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 18:26:12.506220  224323 kubeadm.go:883] updating cluster {Name:auto-111074 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-111074 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I1018 18:26:12.506333  224323 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 18:26:12.506387  224323 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 18:26:12.543235  224323 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 18:26:12.543257  224323 crio.go:433] Images already preloaded, skipping extraction
	I1018 18:26:12.543319  224323 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 18:26:12.580200  224323 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 18:26:12.580271  224323 cache_images.go:85] Images are preloaded, skipping loading
	I1018 18:26:12.580293  224323 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1018 18:26:12.580422  224323 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-111074 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-111074 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
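(Illustrative note: the kubelet unit fragment above is what minikube later writes as the 10-kubeadm.conf drop-in, see the scp steps further down. A minimal sketch to read the effective unit plus drop-in from the running profile; assumes the auto-111074 profile is still up:)

	# Not from the report: show the kubelet unit and the minikube drop-in on the node.
	out/minikube-linux-arm64 -p auto-111074 ssh "systemctl cat kubelet"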
	I1018 18:26:12.580538  224323 ssh_runner.go:195] Run: crio config
	W1018 18:26:10.326349  221240 pod_ready.go:104] pod "coredns-66bc5c9577-q7mng" is not "Ready", error: <nil>
	W1018 18:26:12.326555  221240 pod_ready.go:104] pod "coredns-66bc5c9577-q7mng" is not "Ready", error: <nil>
	I1018 18:26:12.671128  224323 cni.go:84] Creating CNI manager for ""
	I1018 18:26:12.671199  224323 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 18:26:12.671235  224323 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 18:26:12.671285  224323 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-111074 NodeName:auto-111074 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 18:26:12.671482  224323 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-111074"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 18:26:12.671595  224323 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 18:26:12.679886  224323 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 18:26:12.680005  224323 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 18:26:12.688197  224323 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1018 18:26:12.702690  224323 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 18:26:12.716421  224323 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
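(Illustrative note: the 2208-byte file copied above is the rendered kubeadm config shown earlier, staged at /var/tmp/minikube/kubeadm.yaml.new. A minimal sketch to validate it in place, assuming the `kubeadm config validate` subcommand available in recent releases; not something the test run does:)

	# Not from the report: validate the staged kubeadm config with the bundled binary.
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new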
	I1018 18:26:12.730143  224323 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1018 18:26:12.734392  224323 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 18:26:12.749394  224323 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 18:26:12.916542  224323 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 18:26:12.935174  224323 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/auto-111074 for IP: 192.168.85.2
	I1018 18:26:12.935251  224323 certs.go:195] generating shared ca certs ...
	I1018 18:26:12.935283  224323 certs.go:227] acquiring lock for ca certs: {Name:mk544ed642fa2832e9f6dd22fa45f3270b7c1ad4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:26:12.935455  224323 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key
	I1018 18:26:12.935536  224323 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key
	I1018 18:26:12.935564  224323 certs.go:257] generating profile certs ...
	I1018 18:26:12.935650  224323 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/auto-111074/client.key
	I1018 18:26:12.935698  224323 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/auto-111074/client.crt with IP's: []
	I1018 18:26:13.288623  224323 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/auto-111074/client.crt ...
	I1018 18:26:13.288694  224323 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/auto-111074/client.crt: {Name:mk474827a1ed79c079e368d33137d842f0296147 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:26:13.288950  224323 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/auto-111074/client.key ...
	I1018 18:26:13.288989  224323 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/auto-111074/client.key: {Name:mk2f9cd4544e154adffdb5adb992c48be1817caa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:26:13.289128  224323 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/auto-111074/apiserver.key.861b719e
	I1018 18:26:13.289170  224323 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/auto-111074/apiserver.crt.861b719e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1018 18:26:13.759868  224323 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/auto-111074/apiserver.crt.861b719e ...
	I1018 18:26:13.759895  224323 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/auto-111074/apiserver.crt.861b719e: {Name:mk41aca77eff0d161b4af3f3692bda3a4f33d81b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:26:13.760137  224323 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/auto-111074/apiserver.key.861b719e ...
	I1018 18:26:13.760151  224323 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/auto-111074/apiserver.key.861b719e: {Name:mk7dd7a3d6cf6c973bf618351a13a78d20d534b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:26:13.760227  224323 certs.go:382] copying /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/auto-111074/apiserver.crt.861b719e -> /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/auto-111074/apiserver.crt
	I1018 18:26:13.760300  224323 certs.go:386] copying /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/auto-111074/apiserver.key.861b719e -> /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/auto-111074/apiserver.key
	I1018 18:26:13.760351  224323 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/auto-111074/proxy-client.key
	I1018 18:26:13.760363  224323 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/auto-111074/proxy-client.crt with IP's: []
	I1018 18:26:14.660452  224323 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/auto-111074/proxy-client.crt ...
	I1018 18:26:14.660481  224323 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/auto-111074/proxy-client.crt: {Name:mkbe5243faf10ae8c3dc239ca34f754fdd391948 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:26:14.660647  224323 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/auto-111074/proxy-client.key ...
	I1018 18:26:14.660661  224323 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/auto-111074/proxy-client.key: {Name:mk694638a1c79b9529d90c102bbc84f9dc4c7fb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:26:14.660835  224323 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem (1338 bytes)
	W1018 18:26:14.660877  224323 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320_empty.pem, impossibly tiny 0 bytes
	I1018 18:26:14.660890  224323 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 18:26:14.660916  224323 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/ca.pem (1078 bytes)
	I1018 18:26:14.660960  224323 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/cert.pem (1123 bytes)
	I1018 18:26:14.660996  224323 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/certs/key.pem (1675 bytes)
	I1018 18:26:14.661044  224323 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem (1708 bytes)
	I1018 18:26:14.661622  224323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 18:26:14.683484  224323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 18:26:14.702621  224323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 18:26:14.721020  224323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1018 18:26:14.756236  224323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/auto-111074/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1018 18:26:14.807965  224323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/auto-111074/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 18:26:14.830459  224323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/auto-111074/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 18:26:14.848762  224323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/auto-111074/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 18:26:14.866520  224323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 18:26:14.883946  224323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/certs/4320.pem --> /usr/share/ca-certificates/4320.pem (1338 bytes)
	I1018 18:26:14.901688  224323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/ssl/certs/43202.pem --> /usr/share/ca-certificates/43202.pem (1708 bytes)
	I1018 18:26:14.919785  224323 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 18:26:14.932779  224323 ssh_runner.go:195] Run: openssl version
	I1018 18:26:14.939957  224323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 18:26:14.948520  224323 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 18:26:14.952563  224323 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I1018 18:26:14.952636  224323 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 18:26:14.993679  224323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 18:26:15.002016  224323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4320.pem && ln -fs /usr/share/ca-certificates/4320.pem /etc/ssl/certs/4320.pem"
	I1018 18:26:15.012310  224323 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4320.pem
	I1018 18:26:15.018033  224323 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 17:19 /usr/share/ca-certificates/4320.pem
	I1018 18:26:15.018121  224323 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4320.pem
	I1018 18:26:15.061247  224323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4320.pem /etc/ssl/certs/51391683.0"
	I1018 18:26:15.070303  224323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43202.pem && ln -fs /usr/share/ca-certificates/43202.pem /etc/ssl/certs/43202.pem"
	I1018 18:26:15.079640  224323 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43202.pem
	I1018 18:26:15.084416  224323 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 17:19 /usr/share/ca-certificates/43202.pem
	I1018 18:26:15.084499  224323 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43202.pem
	I1018 18:26:15.127025  224323 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43202.pem /etc/ssl/certs/3ec20f2e.0"
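(Illustrative note: the openssl/ln steps above follow the standard OpenSSL hash-symlink convention: each CA in /etc/ssl/certs gets a symlink named after its subject hash with a ".0" suffix so the library can locate it. A minimal sketch of that pattern for one of the certs named in the log:)

	# Not from the report: same hash-symlink pattern the ln -fs steps above implement.
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"   # b5213941.0 per the log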
	I1018 18:26:15.135884  224323 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 18:26:15.140357  224323 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 18:26:15.140409  224323 kubeadm.go:400] StartCluster: {Name:auto-111074 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-111074 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 18:26:15.140485  224323 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 18:26:15.140551  224323 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 18:26:15.169721  224323 cri.go:89] found id: ""
	I1018 18:26:15.169789  224323 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 18:26:15.179666  224323 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 18:26:15.188048  224323 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 18:26:15.188114  224323 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 18:26:15.198453  224323 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 18:26:15.198482  224323 kubeadm.go:157] found existing configuration files:
	
	I1018 18:26:15.198531  224323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 18:26:15.206784  224323 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 18:26:15.206853  224323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 18:26:15.214381  224323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 18:26:15.222248  224323 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 18:26:15.222313  224323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 18:26:15.229766  224323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 18:26:15.238050  224323 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 18:26:15.238120  224323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 18:26:15.245567  224323 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 18:26:15.254379  224323 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 18:26:15.254441  224323 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1018 18:26:15.264256  224323 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 18:26:15.336831  224323 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 18:26:15.337373  224323 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 18:26:15.361584  224323 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 18:26:15.361665  224323 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1018 18:26:15.361709  224323 kubeadm.go:318] OS: Linux
	I1018 18:26:15.361761  224323 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 18:26:15.361816  224323 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1018 18:26:15.361869  224323 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 18:26:15.361923  224323 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 18:26:15.361977  224323 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 18:26:15.362035  224323 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 18:26:15.362086  224323 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 18:26:15.362141  224323 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 18:26:15.362192  224323 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1018 18:26:15.484530  224323 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 18:26:15.484657  224323 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 18:26:15.484772  224323 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 18:26:15.496834  224323 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 18:26:15.504481  224323 out.go:252]   - Generating certificates and keys ...
	I1018 18:26:15.504577  224323 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 18:26:15.504661  224323 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 18:26:15.925815  224323 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 18:26:16.179151  224323 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 18:26:16.887629  224323 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 18:26:17.079724  224323 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	W1018 18:26:14.822799  221240 pod_ready.go:104] pod "coredns-66bc5c9577-q7mng" is not "Ready", error: <nil>
	W1018 18:26:17.322126  221240 pod_ready.go:104] pod "coredns-66bc5c9577-q7mng" is not "Ready", error: <nil>
	I1018 18:26:17.687268  224323 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 18:26:17.687786  224323 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [auto-111074 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1018 18:26:18.606076  224323 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 18:26:18.606691  224323 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [auto-111074 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1018 18:26:18.875886  224323 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 18:26:19.111440  224323 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 18:26:20.131362  224323 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 18:26:20.131691  224323 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 18:26:21.346817  224323 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 18:26:22.004898  224323 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	W1018 18:26:19.823747  221240 pod_ready.go:104] pod "coredns-66bc5c9577-q7mng" is not "Ready", error: <nil>
	W1018 18:26:21.824111  221240 pod_ready.go:104] pod "coredns-66bc5c9577-q7mng" is not "Ready", error: <nil>
	I1018 18:26:22.879275  224323 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 18:26:24.154575  224323 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 18:26:24.863902  224323 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 18:26:24.864622  224323 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 18:26:24.867137  224323 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 18:26:24.870862  224323 out.go:252]   - Booting up control plane ...
	I1018 18:26:24.870969  224323 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 18:26:24.871050  224323 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 18:26:24.871120  224323 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 18:26:24.900404  224323 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 18:26:24.900757  224323 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 18:26:24.907814  224323 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 18:26:24.908130  224323 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 18:26:24.908178  224323 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 18:26:25.048897  224323 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 18:26:25.049047  224323 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 18:26:27.554188  224323 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 2.501485681s
	I1018 18:26:27.554311  224323 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 18:26:27.554397  224323 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1018 18:26:27.554490  224323 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 18:26:27.554571  224323 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1018 18:26:24.321733  221240 pod_ready.go:104] pod "coredns-66bc5c9577-q7mng" is not "Ready", error: <nil>
	W1018 18:26:26.323672  221240 pod_ready.go:104] pod "coredns-66bc5c9577-q7mng" is not "Ready", error: <nil>
	W1018 18:26:28.832067  221240 pod_ready.go:104] pod "coredns-66bc5c9577-q7mng" is not "Ready", error: <nil>
	I1018 18:26:30.405395  224323 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.851245248s
	W1018 18:26:31.321247  221240 pod_ready.go:104] pod "coredns-66bc5c9577-q7mng" is not "Ready", error: <nil>
	W1018 18:26:33.322726  221240 pod_ready.go:104] pod "coredns-66bc5c9577-q7mng" is not "Ready", error: <nil>
	I1018 18:26:33.459212  224323 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 5.905272777s
	I1018 18:26:34.555198  224323 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 7.001321627s
	I1018 18:26:34.578525  224323 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 18:26:34.598306  224323 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 18:26:34.619606  224323 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 18:26:34.620032  224323 kubeadm.go:318] [mark-control-plane] Marking the node auto-111074 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 18:26:34.638455  224323 kubeadm.go:318] [bootstrap-token] Using token: 8ryud3.ooab0ywsri4uwu90
	I1018 18:26:34.641357  224323 out.go:252]   - Configuring RBAC rules ...
	I1018 18:26:34.641490  224323 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 18:26:34.658582  224323 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 18:26:34.669155  224323 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 18:26:34.674907  224323 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 18:26:34.685996  224323 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 18:26:34.692628  224323 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 18:26:34.972881  224323 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 18:26:35.387582  224323 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 18:26:35.963099  224323 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 18:26:35.964255  224323 kubeadm.go:318] 
	I1018 18:26:35.964342  224323 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 18:26:35.964352  224323 kubeadm.go:318] 
	I1018 18:26:35.964433  224323 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 18:26:35.964442  224323 kubeadm.go:318] 
	I1018 18:26:35.964468  224323 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 18:26:35.964534  224323 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 18:26:35.964599  224323 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 18:26:35.964609  224323 kubeadm.go:318] 
	I1018 18:26:35.964665  224323 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 18:26:35.964674  224323 kubeadm.go:318] 
	I1018 18:26:35.964724  224323 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 18:26:35.964732  224323 kubeadm.go:318] 
	I1018 18:26:35.964786  224323 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 18:26:35.964871  224323 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 18:26:35.964977  224323 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 18:26:35.964989  224323 kubeadm.go:318] 
	I1018 18:26:35.965078  224323 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 18:26:35.965166  224323 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 18:26:35.965175  224323 kubeadm.go:318] 
	I1018 18:26:35.965262  224323 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 8ryud3.ooab0ywsri4uwu90 \
	I1018 18:26:35.965374  224323 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:d0244c5bf86cdf97546c6a22045cb6ed9d7ead524d9c98d9ca35da77d5d7a04d \
	I1018 18:26:35.965399  224323 kubeadm.go:318] 	--control-plane 
	I1018 18:26:35.965407  224323 kubeadm.go:318] 
	I1018 18:26:35.965495  224323 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 18:26:35.965503  224323 kubeadm.go:318] 
	I1018 18:26:35.965593  224323 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 8ryud3.ooab0ywsri4uwu90 \
	I1018 18:26:35.965723  224323 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:d0244c5bf86cdf97546c6a22045cb6ed9d7ead524d9c98d9ca35da77d5d7a04d 
	I1018 18:26:35.971248  224323 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1018 18:26:35.971486  224323 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1018 18:26:35.971602  224323 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1018 18:26:35.971621  224323 cni.go:84] Creating CNI manager for ""
	I1018 18:26:35.971629  224323 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 18:26:35.976648  224323 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1018 18:26:34.838899  221240 pod_ready.go:94] pod "coredns-66bc5c9577-q7mng" is "Ready"
	I1018 18:26:34.838924  221240 pod_ready.go:86] duration metric: took 31.023214459s for pod "coredns-66bc5c9577-q7mng" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:26:34.842319  221240 pod_ready.go:83] waiting for pod "etcd-no-preload-729957" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:26:34.847723  221240 pod_ready.go:94] pod "etcd-no-preload-729957" is "Ready"
	I1018 18:26:34.847750  221240 pod_ready.go:86] duration metric: took 5.404484ms for pod "etcd-no-preload-729957" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:26:34.850288  221240 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-729957" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:26:34.854731  221240 pod_ready.go:94] pod "kube-apiserver-no-preload-729957" is "Ready"
	I1018 18:26:34.854757  221240 pod_ready.go:86] duration metric: took 4.442642ms for pod "kube-apiserver-no-preload-729957" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:26:34.856946  221240 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-729957" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:26:35.020925  221240 pod_ready.go:94] pod "kube-controller-manager-no-preload-729957" is "Ready"
	I1018 18:26:35.021028  221240 pod_ready.go:86] duration metric: took 164.050245ms for pod "kube-controller-manager-no-preload-729957" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:26:35.219666  221240 pod_ready.go:83] waiting for pod "kube-proxy-75znn" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:26:35.619633  221240 pod_ready.go:94] pod "kube-proxy-75znn" is "Ready"
	I1018 18:26:35.619706  221240 pod_ready.go:86] duration metric: took 399.945845ms for pod "kube-proxy-75znn" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:26:35.818878  221240 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-729957" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:26:36.219793  221240 pod_ready.go:94] pod "kube-scheduler-no-preload-729957" is "Ready"
	I1018 18:26:36.219871  221240 pod_ready.go:86] duration metric: took 400.95865ms for pod "kube-scheduler-no-preload-729957" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 18:26:36.219898  221240 pod_ready.go:40] duration metric: took 32.409356227s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 18:26:36.288922  221240 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 18:26:36.292160  221240 out.go:179] * Done! kubectl is now configured to use "no-preload-729957" cluster and "default" namespace by default
	I1018 18:26:35.979707  224323 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 18:26:35.986511  224323 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1018 18:26:35.986532  224323 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 18:26:36.011239  224323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
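(Illustrative note: the manifest applied above is the kindnet CNI that minikube recommends for the docker driver + crio runtime, per the cni.go lines earlier. A minimal follow-up check using the same kubectl and kubeconfig paths from the log; the `app=kindnet` label selector is an assumption, not taken from the report:)

	# Not from the report: confirm the kindnet daemonset exists after the apply above.
	sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system get ds,pods -l app=kindnet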
	I1018 18:26:36.992453  224323 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 18:26:36.992627  224323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:26:36.992714  224323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-111074 minikube.k8s.io/updated_at=2025_10_18T18_26_36_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404 minikube.k8s.io/name=auto-111074 minikube.k8s.io/primary=true
	I1018 18:26:37.180863  224323 ops.go:34] apiserver oom_adj: -16
	I1018 18:26:37.181001  224323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:26:37.681712  224323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:26:38.181084  224323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:26:38.681864  224323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:26:39.181162  224323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:26:39.681884  224323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:26:40.181103  224323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:26:40.681989  224323 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 18:26:40.864312  224323 kubeadm.go:1113] duration metric: took 3.871741723s to wait for elevateKubeSystemPrivileges
	I1018 18:26:40.864344  224323 kubeadm.go:402] duration metric: took 25.723938625s to StartCluster
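(Illustrative note: the repeated `kubectl get sa default` calls above are minikube polling roughly every half second until the default service account exists, which is what the elevateKubeSystemPrivileges duration measures. A minimal standalone sketch of the same wait, assuming a configured kubectl on PATH rather than minikube's bundled binary:)

	# Not from the report: equivalent wait-for-default-serviceaccount loop.
	until kubectl -n default get serviceaccount default >/dev/null 2>&1; do
	  sleep 0.5
	done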
	I1018 18:26:40.864361  224323 settings.go:142] acquiring lock: {Name:mk3a3fd093bc95e20cc1842611fedcbe4a79e692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:26:40.864421  224323 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-2509/kubeconfig
	I1018 18:26:40.865404  224323 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-2509/kubeconfig: {Name:mk229d7d0b6575771899e0ce8a346bbddf5cf86b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 18:26:40.865637  224323 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 18:26:40.865739  224323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 18:26:40.865995  224323 config.go:182] Loaded profile config "auto-111074": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 18:26:40.865971  224323 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 18:26:40.866063  224323 addons.go:69] Setting storage-provisioner=true in profile "auto-111074"
	I1018 18:26:40.866079  224323 addons.go:69] Setting default-storageclass=true in profile "auto-111074"
	I1018 18:26:40.866084  224323 addons.go:238] Setting addon storage-provisioner=true in "auto-111074"
	I1018 18:26:40.866095  224323 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-111074"
	I1018 18:26:40.866108  224323 host.go:66] Checking if "auto-111074" exists ...
	I1018 18:26:40.866405  224323 cli_runner.go:164] Run: docker container inspect auto-111074 --format={{.State.Status}}
	I1018 18:26:40.866594  224323 cli_runner.go:164] Run: docker container inspect auto-111074 --format={{.State.Status}}
	I1018 18:26:40.871435  224323 out.go:179] * Verifying Kubernetes components...
	I1018 18:26:40.876200  224323 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 18:26:40.907654  224323 addons.go:238] Setting addon default-storageclass=true in "auto-111074"
	I1018 18:26:40.907691  224323 host.go:66] Checking if "auto-111074" exists ...
	I1018 18:26:40.908095  224323 cli_runner.go:164] Run: docker container inspect auto-111074 --format={{.State.Status}}
	I1018 18:26:40.911154  224323 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 18:26:40.916309  224323 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 18:26:40.916340  224323 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 18:26:40.916406  224323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-111074
	I1018 18:26:40.953595  224323 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/auto-111074/id_rsa Username:docker}
	I1018 18:26:40.954190  224323 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 18:26:40.954211  224323 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 18:26:40.954271  224323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-111074
	I1018 18:26:40.986211  224323 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/auto-111074/id_rsa Username:docker}
	I1018 18:26:41.214567  224323 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1018 18:26:41.227892  224323 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 18:26:41.257710  224323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 18:26:41.285897  224323 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 18:26:41.801481  224323 node_ready.go:35] waiting up to 15m0s for node "auto-111074" to be "Ready" ...
	I1018 18:26:41.802391  224323 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1018 18:26:42.062919  224323 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1018 18:26:42.065836  224323 addons.go:514] duration metric: took 1.199856343s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1018 18:26:42.306785  224323 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-111074" context rescaled to 1 replicas
	W1018 18:26:43.804501  224323 node_ready.go:57] node "auto-111074" has "Ready":"False" status (will retry)
	W1018 18:26:45.805193  224323 node_ready.go:57] node "auto-111074" has "Ready":"False" status (will retry)
	W1018 18:26:48.304916  224323 node_ready.go:57] node "auto-111074" has "Ready":"False" status (will retry)
	W1018 18:26:50.804969  224323 node_ready.go:57] node "auto-111074" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Oct 18 18:26:28 no-preload-729957 crio[647]: time="2025-10-18T18:26:28.828545196Z" level=info msg="Removed container 746019b0ec9b8fbc991561dd0fafcc67d3592ed100d0dd188df361fec5595531: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2jw6d/dashboard-metrics-scraper" id=5680264c-587f-4d6d-8f6b-19f70c30faf2 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 18:26:33 no-preload-729957 conmon[1158]: conmon 0fa470fa642abc5faf16 <ninfo>: container 1161 exited with status 1
	Oct 18 18:26:33 no-preload-729957 crio[647]: time="2025-10-18T18:26:33.81268602Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=0b1deab0-631c-407d-8118-b91d5171fb57 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 18:26:33 no-preload-729957 crio[647]: time="2025-10-18T18:26:33.813866433Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=98f17027-1ff0-41a4-8928-d052babcfe56 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 18:26:33 no-preload-729957 crio[647]: time="2025-10-18T18:26:33.817358309Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=effb10e4-e923-42fd-89bf-f4d99794a40a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 18:26:33 no-preload-729957 crio[647]: time="2025-10-18T18:26:33.81761099Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 18:26:33 no-preload-729957 crio[647]: time="2025-10-18T18:26:33.832428058Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 18:26:33 no-preload-729957 crio[647]: time="2025-10-18T18:26:33.832610502Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/0beac2733dfba336b2e80eec0a88ae0bdee5db7e7c971cb9557db83be595a6e0/merged/etc/passwd: no such file or directory"
	Oct 18 18:26:33 no-preload-729957 crio[647]: time="2025-10-18T18:26:33.832633772Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/0beac2733dfba336b2e80eec0a88ae0bdee5db7e7c971cb9557db83be595a6e0/merged/etc/group: no such file or directory"
	Oct 18 18:26:33 no-preload-729957 crio[647]: time="2025-10-18T18:26:33.832885057Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 18:26:33 no-preload-729957 crio[647]: time="2025-10-18T18:26:33.87785449Z" level=info msg="Created container f8531bead1ef2e3eada16cba59c589956dc484ab1cff2f47a411a5f3dbd97427: kube-system/storage-provisioner/storage-provisioner" id=effb10e4-e923-42fd-89bf-f4d99794a40a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 18:26:33 no-preload-729957 crio[647]: time="2025-10-18T18:26:33.879509871Z" level=info msg="Starting container: f8531bead1ef2e3eada16cba59c589956dc484ab1cff2f47a411a5f3dbd97427" id=bae67397-a492-4ee0-b3e6-9fa8a11526cb name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 18:26:33 no-preload-729957 crio[647]: time="2025-10-18T18:26:33.888000746Z" level=info msg="Started container" PID=1631 containerID=f8531bead1ef2e3eada16cba59c589956dc484ab1cff2f47a411a5f3dbd97427 description=kube-system/storage-provisioner/storage-provisioner id=bae67397-a492-4ee0-b3e6-9fa8a11526cb name=/runtime.v1.RuntimeService/StartContainer sandboxID=e61150382859edd233bddf3dac5345409f9f681c0873b33012c2409ff14a3372
	Oct 18 18:26:43 no-preload-729957 crio[647]: time="2025-10-18T18:26:43.412029114Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 18:26:43 no-preload-729957 crio[647]: time="2025-10-18T18:26:43.418417201Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 18:26:43 no-preload-729957 crio[647]: time="2025-10-18T18:26:43.418454584Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 18:26:43 no-preload-729957 crio[647]: time="2025-10-18T18:26:43.418477378Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 18:26:43 no-preload-729957 crio[647]: time="2025-10-18T18:26:43.421731327Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 18:26:43 no-preload-729957 crio[647]: time="2025-10-18T18:26:43.421772641Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 18:26:43 no-preload-729957 crio[647]: time="2025-10-18T18:26:43.421796099Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 18:26:43 no-preload-729957 crio[647]: time="2025-10-18T18:26:43.425093077Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 18:26:43 no-preload-729957 crio[647]: time="2025-10-18T18:26:43.425128992Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 18:26:43 no-preload-729957 crio[647]: time="2025-10-18T18:26:43.425153312Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 18:26:43 no-preload-729957 crio[647]: time="2025-10-18T18:26:43.428047864Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 18:26:43 no-preload-729957 crio[647]: time="2025-10-18T18:26:43.428080554Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	f8531bead1ef2       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           19 seconds ago       Running             storage-provisioner         2                   e61150382859e       storage-provisioner                          kube-system
	471127644b325       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           24 seconds ago       Exited              dashboard-metrics-scraper   2                   3a4d29ccc9199       dashboard-metrics-scraper-6ffb444bf9-2jw6d   kubernetes-dashboard
	7624f6abd4598       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   35 seconds ago       Running             kubernetes-dashboard        0                   7333b2aa64d8d       kubernetes-dashboard-855c9754f9-dq5cz        kubernetes-dashboard
	0fa470fa642ab       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           50 seconds ago       Exited              storage-provisioner         1                   e61150382859e       storage-provisioner                          kube-system
	5f5f900bb1fe2       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           50 seconds ago       Running             coredns                     1                   6e2d96f08ec51       coredns-66bc5c9577-q7mng                     kube-system
	8cbe09ef34bf4       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           50 seconds ago       Running             kindnet-cni                 1                   fb051ff6319a9       kindnet-4hbt7                                kube-system
	2a3d029247f0f       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           50 seconds ago       Running             kube-proxy                  1                   9523085770ded       kube-proxy-75znn                             kube-system
	fc5063529049a       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           51 seconds ago       Running             busybox                     1                   642da68c76e5a       busybox                                      default
	a51a8b9c45aa1       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   ac5622478a1f2       kube-apiserver-no-preload-729957             kube-system
	6974399a43a07       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   c31990e6c5794       kube-controller-manager-no-preload-729957    kube-system
	d2a6df964e5a2       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   e24d20275da4a       kube-scheduler-no-preload-729957             kube-system
	b42f50a512a46       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   fba33013edfbf       etcd-no-preload-729957                       kube-system
	
	
	==> coredns [5f5f900bb1fe229f3538acdd9a0c3aad246dff6301bfc684afdfd990ab97fe94] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:57117 - 7152 "HINFO IN 3496716410604839321.5072036916165056099. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.016380576s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-729957
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-729957
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8370839eae78ceaf8cbcc2d8b43d8334eb508404
	                    minikube.k8s.io/name=no-preload-729957
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T18_24_55_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 18:24:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-729957
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 18:26:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 18:26:31 +0000   Sat, 18 Oct 2025 18:24:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 18:26:31 +0000   Sat, 18 Oct 2025 18:24:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 18:26:31 +0000   Sat, 18 Oct 2025 18:24:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 18:26:31 +0000   Sat, 18 Oct 2025 18:25:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-729957
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                767ca1b7-c7ba-48aa-bccb-3679302b1946
	  Boot ID:                    8a1d6305-2994-4aa4-a4cc-6b62966b9918
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-66bc5c9577-q7mng                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     113s
	  kube-system                 etcd-no-preload-729957                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         118s
	  kube-system                 kindnet-4hbt7                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      114s
	  kube-system                 kube-apiserver-no-preload-729957              250m (12%)    0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-controller-manager-no-preload-729957     200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-proxy-75znn                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-scheduler-no-preload-729957              100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-2jw6d    0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-dq5cz         0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 112s                   kube-proxy       
	  Normal   Starting                 50s                    kube-proxy       
	  Normal   NodeHasSufficientMemory  2m10s (x8 over 2m10s)  kubelet          Node no-preload-729957 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m10s (x8 over 2m10s)  kubelet          Node no-preload-729957 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m10s (x8 over 2m10s)  kubelet          Node no-preload-729957 status is now: NodeHasSufficientPID
	  Normal   Starting                 119s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 119s                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    118s                   kubelet          Node no-preload-729957 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     118s                   kubelet          Node no-preload-729957 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  118s                   kubelet          Node no-preload-729957 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           115s                   node-controller  Node no-preload-729957 event: Registered Node no-preload-729957 in Controller
	  Normal   NodeReady                97s                    kubelet          Node no-preload-729957 status is now: NodeReady
	  Normal   Starting                 61s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 61s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  61s (x8 over 61s)      kubelet          Node no-preload-729957 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s (x8 over 61s)      kubelet          Node no-preload-729957 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s (x8 over 61s)      kubelet          Node no-preload-729957 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           48s                    node-controller  Node no-preload-729957 event: Registered Node no-preload-729957 in Controller
	
	
	==> dmesg <==
	[ +25.128760] overlayfs: idmapped layers are currently not supported
	[Oct18 18:06] overlayfs: idmapped layers are currently not supported
	[Oct18 18:07] overlayfs: idmapped layers are currently not supported
	[Oct18 18:08] overlayfs: idmapped layers are currently not supported
	[Oct18 18:09] overlayfs: idmapped layers are currently not supported
	[Oct18 18:11] overlayfs: idmapped layers are currently not supported
	[Oct18 18:13] overlayfs: idmapped layers are currently not supported
	[ +30.969240] overlayfs: idmapped layers are currently not supported
	[Oct18 18:15] overlayfs: idmapped layers are currently not supported
	[Oct18 18:16] overlayfs: idmapped layers are currently not supported
	[Oct18 18:17] overlayfs: idmapped layers are currently not supported
	[ +23.167826] overlayfs: idmapped layers are currently not supported
	[Oct18 18:18] overlayfs: idmapped layers are currently not supported
	[ +38.509809] overlayfs: idmapped layers are currently not supported
	[Oct18 18:19] overlayfs: idmapped layers are currently not supported
	[Oct18 18:21] overlayfs: idmapped layers are currently not supported
	[Oct18 18:22] overlayfs: idmapped layers are currently not supported
	[Oct18 18:23] overlayfs: idmapped layers are currently not supported
	[ +30.822562] overlayfs: idmapped layers are currently not supported
	[Oct18 18:24] bpfilter: read fail -512
	[ +10.607871] overlayfs: idmapped layers are currently not supported
	[Oct18 18:25] overlayfs: idmapped layers are currently not supported
	[ +26.762544] overlayfs: idmapped layers are currently not supported
	[ +14.684259] overlayfs: idmapped layers are currently not supported
	[Oct18 18:26] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [b42f50a512a46ed3a6cad329c67f5e35b5354a294a55db5944cfbd20dd29cbd2] <==
	{"level":"warn","ts":"2025-10-18T18:25:57.844141Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:57.867449Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:57.886017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:57.913571Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:57.928995Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:57.960198Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:57.978332Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:57.991503Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:58.017317Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:58.039726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:58.067010Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:58.103789Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:58.137399Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:58.169435Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:58.187073Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:58.230324Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:58.238317Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:58.272472Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:58.299936Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:58.356103Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T18:25:58.448285Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49372","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T18:26:01.835593Z","caller":"traceutil/trace.go:172","msg":"trace[1702060539] transaction","detail":"{read_only:false; response_revision:511; number_of_response:1; }","duration":"134.592519ms","start":"2025-10-18T18:26:01.700878Z","end":"2025-10-18T18:26:01.835470Z","steps":["trace[1702060539] 'process raft request'  (duration: 88.333191ms)","trace[1702060539] 'compare'  (duration: 46.017183ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-18T18:26:02.704686Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"100.282726ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:controller:pv-protection-controller\" limit:1 ","response":"range_response_count:1 size:693"}
	{"level":"info","ts":"2025-10-18T18:26:02.704753Z","caller":"traceutil/trace.go:172","msg":"trace[1533455430] range","detail":"{range_begin:/registry/clusterroles/system:controller:pv-protection-controller; range_end:; response_count:1; response_revision:530; }","duration":"100.383348ms","start":"2025-10-18T18:26:02.604355Z","end":"2025-10-18T18:26:02.704739Z","steps":["trace[1533455430] 'agreement among raft nodes before linearized reading'  (duration: 40.945651ms)","trace[1533455430] 'range keys from in-memory index tree'  (duration: 59.253711ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-18T18:26:02.705007Z","caller":"traceutil/trace.go:172","msg":"trace[941557549] transaction","detail":"{read_only:false; response_revision:531; number_of_response:1; }","duration":"100.679499ms","start":"2025-10-18T18:26:02.604315Z","end":"2025-10-18T18:26:02.704994Z","steps":["trace[941557549] 'process raft request'  (duration: 40.952437ms)","trace[941557549] 'compare'  (duration: 59.221267ms)"],"step_count":2}
	
	
	==> kernel <==
	 18:26:53 up  2:09,  0 user,  load average: 3.20, 3.35, 2.95
	Linux no-preload-729957 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8cbe09ef34bf4eef049fd6b1f047b6b1c569dbdbffc529ac4e6883ef231e2b93] <==
	I1018 18:26:03.114241       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 18:26:03.114853       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1018 18:26:03.114995       1 main.go:148] setting mtu 1500 for CNI 
	I1018 18:26:03.115014       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 18:26:03.115024       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T18:26:03Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 18:26:03.406740       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 18:26:03.406822       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 18:26:03.406854       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 18:26:03.407159       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1018 18:26:33.407353       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1018 18:26:33.407564       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1018 18:26:33.407647       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1018 18:26:33.407723       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1018 18:26:35.007154       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 18:26:35.007199       1 metrics.go:72] Registering metrics
	I1018 18:26:35.007284       1 controller.go:711] "Syncing nftables rules"
	I1018 18:26:43.411723       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 18:26:43.411777       1 main.go:301] handling current node
	I1018 18:26:53.407566       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 18:26:53.407620       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a51a8b9c45aa1dd947ab88c80db7d69a3ada1bf2e6ca00bc66384aaccb0ff136] <==
	I1018 18:26:00.644337       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1018 18:26:00.644412       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1018 18:26:00.644432       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1018 18:26:00.661021       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1018 18:26:00.661468       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1018 18:26:00.661484       1 policy_source.go:240] refreshing policies
	I1018 18:26:00.662569       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1018 18:26:00.662586       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1018 18:26:00.681932       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 18:26:00.682629       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 18:26:00.683126       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1018 18:26:00.686421       1 cache.go:39] Caches are synced for autoregister controller
	I1018 18:26:00.710314       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1018 18:26:00.799370       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	E1018 18:26:00.873899       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1018 18:26:00.894266       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 18:26:02.263916       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 18:26:02.462718       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 18:26:02.711944       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 18:26:02.796172       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 18:26:03.135284       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.56.83"}
	I1018 18:26:03.190900       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.35.104"}
	I1018 18:26:06.093292       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 18:26:06.143610       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 18:26:06.322897       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [6974399a43a070944c3ef86eb0363ba4bca8f5c775d0d5143be212a028542142] <==
	I1018 18:26:05.697615       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1018 18:26:05.700360       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1018 18:26:05.700389       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 18:26:05.702565       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1018 18:26:05.711891       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1018 18:26:05.711994       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1018 18:26:05.711897       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1018 18:26:05.712056       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1018 18:26:05.712093       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1018 18:26:05.712104       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1018 18:26:05.712110       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1018 18:26:05.718155       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1018 18:26:05.719361       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 18:26:05.720485       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1018 18:26:05.721900       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1018 18:26:05.724482       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 18:26:05.724802       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 18:26:05.729222       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1018 18:26:05.731894       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1018 18:26:05.735837       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1018 18:26:05.736766       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 18:26:05.736799       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1018 18:26:05.741172       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 18:26:05.748766       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 18:26:05.759156       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	
	
	==> kube-proxy [2a3d029247f0f2961084715b69f7ac4e03f5bd09abdd133077b57c82216aefd4] <==
	I1018 18:26:03.109792       1 server_linux.go:53] "Using iptables proxy"
	I1018 18:26:03.219051       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 18:26:03.320186       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 18:26:03.320266       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1018 18:26:03.320346       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 18:26:03.344293       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 18:26:03.344423       1 server_linux.go:132] "Using iptables Proxier"
	I1018 18:26:03.348393       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 18:26:03.348766       1 server.go:527] "Version info" version="v1.34.1"
	I1018 18:26:03.349008       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 18:26:03.350330       1 config.go:200] "Starting service config controller"
	I1018 18:26:03.350388       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 18:26:03.350429       1 config.go:106] "Starting endpoint slice config controller"
	I1018 18:26:03.350455       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 18:26:03.350507       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 18:26:03.350534       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 18:26:03.351302       1 config.go:309] "Starting node config controller"
	I1018 18:26:03.351354       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 18:26:03.351382       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 18:26:03.450916       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 18:26:03.450925       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 18:26:03.450959       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [d2a6df964e5a27a75360411f0fbe62d805660605d883656062b3e9b3c98ffc61] <==
	I1018 18:25:58.647753       1 serving.go:386] Generated self-signed cert in-memory
	I1018 18:26:02.188157       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 18:26:02.188266       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 18:26:02.209831       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 18:26:02.209961       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1018 18:26:02.209983       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1018 18:26:02.210010       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 18:26:02.213096       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 18:26:02.213110       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 18:26:02.213138       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 18:26:02.213144       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 18:26:02.311649       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1018 18:26:02.314976       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 18:26:02.315995       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Oct 18 18:26:06 no-preload-729957 kubelet[765]: I1018 18:26:06.286107     765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/c399e443-ef3f-4155-9f03-484901165b54-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-dq5cz\" (UID: \"c399e443-ef3f-4155-9f03-484901165b54\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dq5cz"
	Oct 18 18:26:06 no-preload-729957 kubelet[765]: I1018 18:26:06.286693     765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2mxv\" (UniqueName: \"kubernetes.io/projected/c399e443-ef3f-4155-9f03-484901165b54-kube-api-access-b2mxv\") pod \"kubernetes-dashboard-855c9754f9-dq5cz\" (UID: \"c399e443-ef3f-4155-9f03-484901165b54\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dq5cz"
	Oct 18 18:26:06 no-preload-729957 kubelet[765]: I1018 18:26:06.286800     765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pllsk\" (UniqueName: \"kubernetes.io/projected/6be4ae09-b8e8-4a46-8751-4d264f7697ab-kube-api-access-pllsk\") pod \"dashboard-metrics-scraper-6ffb444bf9-2jw6d\" (UID: \"6be4ae09-b8e8-4a46-8751-4d264f7697ab\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2jw6d"
	Oct 18 18:26:06 no-preload-729957 kubelet[765]: I1018 18:26:06.286890     765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/6be4ae09-b8e8-4a46-8751-4d264f7697ab-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-2jw6d\" (UID: \"6be4ae09-b8e8-4a46-8751-4d264f7697ab\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2jw6d"
	Oct 18 18:26:06 no-preload-729957 kubelet[765]: W1018 18:26:06.594535     765 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/26cea4068f8df271decd5fca2af44d16fcce605ab26c19246830b355e9629673/crio-3a4d29ccc9199d180e601ab1694cef753061bd19651ca598ef308c874a2bb2ae WatchSource:0}: Error finding container 3a4d29ccc9199d180e601ab1694cef753061bd19651ca598ef308c874a2bb2ae: Status 404 returned error can't find the container with id 3a4d29ccc9199d180e601ab1694cef753061bd19651ca598ef308c874a2bb2ae
	Oct 18 18:26:06 no-preload-729957 kubelet[765]: W1018 18:26:06.611613     765 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/26cea4068f8df271decd5fca2af44d16fcce605ab26c19246830b355e9629673/crio-7333b2aa64d8d15687e3137bff890d9d49b81245b5fb3234dc547004b7d43f16 WatchSource:0}: Error finding container 7333b2aa64d8d15687e3137bff890d9d49b81245b5fb3234dc547004b7d43f16: Status 404 returned error can't find the container with id 7333b2aa64d8d15687e3137bff890d9d49b81245b5fb3234dc547004b7d43f16
	Oct 18 18:26:11 no-preload-729957 kubelet[765]: I1018 18:26:11.742471     765 scope.go:117] "RemoveContainer" containerID="c23437207413cefff51a6912eb55b1b0b5065130bf4655ded8d7862bd43595fd"
	Oct 18 18:26:12 no-preload-729957 kubelet[765]: I1018 18:26:12.747118     765 scope.go:117] "RemoveContainer" containerID="c23437207413cefff51a6912eb55b1b0b5065130bf4655ded8d7862bd43595fd"
	Oct 18 18:26:12 no-preload-729957 kubelet[765]: I1018 18:26:12.747410     765 scope.go:117] "RemoveContainer" containerID="746019b0ec9b8fbc991561dd0fafcc67d3592ed100d0dd188df361fec5595531"
	Oct 18 18:26:12 no-preload-729957 kubelet[765]: E1018 18:26:12.747551     765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2jw6d_kubernetes-dashboard(6be4ae09-b8e8-4a46-8751-4d264f7697ab)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2jw6d" podUID="6be4ae09-b8e8-4a46-8751-4d264f7697ab"
	Oct 18 18:26:13 no-preload-729957 kubelet[765]: I1018 18:26:13.754716     765 scope.go:117] "RemoveContainer" containerID="746019b0ec9b8fbc991561dd0fafcc67d3592ed100d0dd188df361fec5595531"
	Oct 18 18:26:13 no-preload-729957 kubelet[765]: E1018 18:26:13.755312     765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2jw6d_kubernetes-dashboard(6be4ae09-b8e8-4a46-8751-4d264f7697ab)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2jw6d" podUID="6be4ae09-b8e8-4a46-8751-4d264f7697ab"
	Oct 18 18:26:16 no-preload-729957 kubelet[765]: I1018 18:26:16.560154     765 scope.go:117] "RemoveContainer" containerID="746019b0ec9b8fbc991561dd0fafcc67d3592ed100d0dd188df361fec5595531"
	Oct 18 18:26:16 no-preload-729957 kubelet[765]: E1018 18:26:16.560930     765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2jw6d_kubernetes-dashboard(6be4ae09-b8e8-4a46-8751-4d264f7697ab)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2jw6d" podUID="6be4ae09-b8e8-4a46-8751-4d264f7697ab"
	Oct 18 18:26:28 no-preload-729957 kubelet[765]: I1018 18:26:28.599439     765 scope.go:117] "RemoveContainer" containerID="746019b0ec9b8fbc991561dd0fafcc67d3592ed100d0dd188df361fec5595531"
	Oct 18 18:26:28 no-preload-729957 kubelet[765]: I1018 18:26:28.796203     765 scope.go:117] "RemoveContainer" containerID="746019b0ec9b8fbc991561dd0fafcc67d3592ed100d0dd188df361fec5595531"
	Oct 18 18:26:28 no-preload-729957 kubelet[765]: I1018 18:26:28.796486     765 scope.go:117] "RemoveContainer" containerID="471127644b325b85c5c10f6876205a690ae590a617ae0a3345a5d15788948065"
	Oct 18 18:26:28 no-preload-729957 kubelet[765]: E1018 18:26:28.796655     765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2jw6d_kubernetes-dashboard(6be4ae09-b8e8-4a46-8751-4d264f7697ab)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2jw6d" podUID="6be4ae09-b8e8-4a46-8751-4d264f7697ab"
	Oct 18 18:26:28 no-preload-729957 kubelet[765]: I1018 18:26:28.839246     765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dq5cz" podStartSLOduration=11.340258849 podStartE2EDuration="22.839229132s" podCreationTimestamp="2025-10-18 18:26:06 +0000 UTC" firstStartedPulling="2025-10-18 18:26:06.615676469 +0000 UTC m=+14.330519737" lastFinishedPulling="2025-10-18 18:26:18.114646752 +0000 UTC m=+25.829490020" observedRunningTime="2025-10-18 18:26:18.789137653 +0000 UTC m=+26.503980937" watchObservedRunningTime="2025-10-18 18:26:28.839229132 +0000 UTC m=+36.554072400"
	Oct 18 18:26:33 no-preload-729957 kubelet[765]: I1018 18:26:33.812309     765 scope.go:117] "RemoveContainer" containerID="0fa470fa642abc5faf16ee6eb2a3332179be9e9bd3853405ee4a917524746026"
	Oct 18 18:26:36 no-preload-729957 kubelet[765]: I1018 18:26:36.560055     765 scope.go:117] "RemoveContainer" containerID="471127644b325b85c5c10f6876205a690ae590a617ae0a3345a5d15788948065"
	Oct 18 18:26:36 no-preload-729957 kubelet[765]: E1018 18:26:36.560235     765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2jw6d_kubernetes-dashboard(6be4ae09-b8e8-4a46-8751-4d264f7697ab)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2jw6d" podUID="6be4ae09-b8e8-4a46-8751-4d264f7697ab"
	Oct 18 18:26:48 no-preload-729957 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 18:26:48 no-preload-729957 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 18:26:48 no-preload-729957 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [7624f6abd459809bd3046f1c044b4a4b33cb3de17198c331adda43e222af9966] <==
	2025/10/18 18:26:18 Using namespace: kubernetes-dashboard
	2025/10/18 18:26:18 Using in-cluster config to connect to apiserver
	2025/10/18 18:26:18 Using secret token for csrf signing
	2025/10/18 18:26:18 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/18 18:26:18 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/18 18:26:18 Successful initial request to the apiserver, version: v1.34.1
	2025/10/18 18:26:18 Generating JWE encryption key
	2025/10/18 18:26:18 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/18 18:26:18 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/18 18:26:19 Initializing JWE encryption key from synchronized object
	2025/10/18 18:26:19 Creating in-cluster Sidecar client
	2025/10/18 18:26:19 Serving insecurely on HTTP port: 9090
	2025/10/18 18:26:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 18:26:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 18:26:18 Starting overwatch
	
	
	==> storage-provisioner [0fa470fa642abc5faf16ee6eb2a3332179be9e9bd3853405ee4a917524746026] <==
	I1018 18:26:03.153983       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1018 18:26:33.159644       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [f8531bead1ef2e3eada16cba59c589956dc484ab1cff2f47a411a5f3dbd97427] <==
	I1018 18:26:33.911050       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 18:26:33.944334       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 18:26:33.944452       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1018 18:26:33.948518       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:26:37.416189       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:26:41.677334       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:26:45.280707       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:26:48.335851       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:26:51.358286       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:26:51.363997       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 18:26:51.364182       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 18:26:51.364429       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-729957_748a7a91-4c2e-4054-994d-961dc505ea48!
	I1018 18:26:51.366023       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"707a2d03-df04-488e-b561-b69c9acdb2d6", APIVersion:"v1", ResourceVersion:"682", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-729957_748a7a91-4c2e-4054-994d-961dc505ea48 became leader
	W1018 18:26:51.373892       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:26:51.378472       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 18:26:51.464852       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-729957_748a7a91-4c2e-4054-994d-961dc505ea48!
	W1018 18:26:53.382466       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 18:26:53.388229       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-729957 -n no-preload-729957
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-729957 -n no-preload-729957: exit status 2 (389.44364ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
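For context on the check above: --format={{.APIServer}} is a Go text/template evaluated against minikube's status struct, which is why the stdout block contains only "Running" even though the overall command exits non-zero for the paused cluster (the helper notes this "may be ok"). An illustrative stand-in, with a hypothetical Status type rather than minikube's real one:

package main

import (
	"os"
	"text/template"
)

// Status is a hypothetical stand-in for minikube's status struct.
type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Running", Kubeconfig: "Configured"}
	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
	if err := tmpl.Execute(os.Stdout, st); err != nil {
		panic(err)
	}
	// prints "Running", matching the -- stdout -- block above
}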
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-729957 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (6.28s)
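The storage-provisioner output in the post-mortem above (leaderelection.go:243/253) is standard client-go leader election: the pod acquires the kube-system/k8s.io-minikube-hostpath lock before starting its provisioner controller, and the repeated "v1 Endpoints is deprecated" warnings come from it still using the old Endpoints-backed lock. A minimal, hedged sketch of the same pattern, wired up with the newer Lease lock; this is not the provisioner's actual source:

package main

import (
	"context"
	"log"
	"os"
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	id, _ := os.Hostname() // lock-holder identity, analogous to no-preload-729957_748a7a91-... above
	lock, err := resourcelock.New(
		resourcelock.LeasesResourceLock, // the provisioner in the log still uses the Endpoints lock
		"kube-system", "k8s.io-minikube-hostpath",
		client.CoreV1(), client.CoordinationV1(),
		resourcelock.ResourceLockConfig{Identity: id},
	)
	if err != nil {
		log.Fatal(err)
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			// corresponds to "successfully acquired lease ..." followed by
			// "Starting provisioner controller ..." in the log above
			OnStartedLeading: func(ctx context.Context) { log.Println("acquired lease, starting controller") },
			OnStoppedLeading: func() { log.Println("lost lease, shutting down") },
		},
	})
}

Using a Lease lock also avoids the v1 Endpoints deprecation warnings that fill the provisioner log above.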
E1018 18:32:45.803338    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/auto-111074/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 18:32:56.803373    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/default-k8s-diff-port-192562/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    

Test pass (251/327)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 6.16
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.09
9 TestDownloadOnly/v1.28.0/DeleteAll 0.57
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.34.1/json-events 7.03
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.09
18 TestDownloadOnly/v1.34.1/DeleteAll 0.22
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.15
21 TestBinaryMirror 0.6
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 169.54
31 TestAddons/serial/GCPAuth/Namespaces 0.21
32 TestAddons/serial/GCPAuth/FakeCredentials 9.25
48 TestAddons/StoppedEnableDisable 12.46
49 TestCertOptions 40.55
50 TestCertExpiration 253.41
52 TestForceSystemdFlag 37.69
53 TestForceSystemdEnv 42.08
59 TestErrorSpam/setup 33.63
60 TestErrorSpam/start 0.77
61 TestErrorSpam/status 1.13
62 TestErrorSpam/pause 6.34
63 TestErrorSpam/unpause 5.47
64 TestErrorSpam/stop 1.57
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 78.81
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 41.26
71 TestFunctional/serial/KubeContext 0.06
72 TestFunctional/serial/KubectlGetPods 0.11
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.52
76 TestFunctional/serial/CacheCmd/cache/add_local 1.07
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.82
81 TestFunctional/serial/CacheCmd/cache/delete 0.12
82 TestFunctional/serial/MinikubeKubectlCmd 0.15
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
84 TestFunctional/serial/ExtraConfig 31.49
85 TestFunctional/serial/ComponentHealth 0.11
86 TestFunctional/serial/LogsCmd 1.46
87 TestFunctional/serial/LogsFileCmd 1.48
88 TestFunctional/serial/InvalidService 4.4
90 TestFunctional/parallel/ConfigCmd 0.51
91 TestFunctional/parallel/DashboardCmd 13.07
92 TestFunctional/parallel/DryRun 0.44
93 TestFunctional/parallel/InternationalLanguage 0.21
94 TestFunctional/parallel/StatusCmd 1.04
99 TestFunctional/parallel/AddonsCmd 0.16
100 TestFunctional/parallel/PersistentVolumeClaim 24.41
102 TestFunctional/parallel/SSHCmd 0.71
103 TestFunctional/parallel/CpCmd 2.15
105 TestFunctional/parallel/FileSync 0.29
106 TestFunctional/parallel/CertSync 1.7
110 TestFunctional/parallel/NodeLabels 0.1
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.7
114 TestFunctional/parallel/License 0.38
115 TestFunctional/parallel/Version/short 0.08
116 TestFunctional/parallel/Version/components 1.38
117 TestFunctional/parallel/ImageCommands/ImageListShort 1.33
119 TestFunctional/parallel/ImageCommands/ImageListJson 0.34
120 TestFunctional/parallel/ImageCommands/ImageListYaml 0.26
121 TestFunctional/parallel/ImageCommands/ImageBuild 4.11
122 TestFunctional/parallel/ImageCommands/Setup 0.76
124 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.66
126 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
128 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.44
132 TestFunctional/parallel/ImageCommands/ImageRemove 0.51
135 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
136 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
140 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
141 TestFunctional/parallel/UpdateContextCmd/no_changes 0.21
142 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.17
143 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.22
145 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
146 TestFunctional/parallel/ProfileCmd/profile_list 0.41
147 TestFunctional/parallel/ProfileCmd/profile_json_output 0.42
148 TestFunctional/parallel/MountCmd/any-port 7.13
149 TestFunctional/parallel/MountCmd/specific-port 2.21
150 TestFunctional/parallel/MountCmd/VerifyCleanup 1.73
151 TestFunctional/parallel/ServiceCmd/List 0.6
152 TestFunctional/parallel/ServiceCmd/JSONOutput 0.64
156 TestFunctional/delete_echo-server_images 0.04
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.03
163 TestMultiControlPlane/serial/StartCluster 174.36
164 TestMultiControlPlane/serial/DeployApp 7.26
165 TestMultiControlPlane/serial/PingHostFromPods 1.45
166 TestMultiControlPlane/serial/AddWorkerNode 59.53
167 TestMultiControlPlane/serial/NodeLabels 0.1
168 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.04
169 TestMultiControlPlane/serial/CopyFile 20.1
170 TestMultiControlPlane/serial/StopSecondaryNode 12.9
171 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.82
172 TestMultiControlPlane/serial/RestartSecondaryNode 109.26
173 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.28
185 TestJSONOutput/start/Command 84.55
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.84
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.24
210 TestKicCustomNetwork/create_custom_network 48.7
211 TestKicCustomNetwork/use_default_bridge_network 37.03
212 TestKicExistingNetwork 37.42
213 TestKicCustomSubnet 36.31
214 TestKicStaticIP 36.5
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 67.34
219 TestMountStart/serial/StartWithMountFirst 11.05
220 TestMountStart/serial/VerifyMountFirst 0.27
221 TestMountStart/serial/StartWithMountSecond 10.22
222 TestMountStart/serial/VerifyMountSecond 0.27
223 TestMountStart/serial/DeleteFirst 1.72
224 TestMountStart/serial/VerifyMountPostDelete 0.27
225 TestMountStart/serial/Stop 1.29
226 TestMountStart/serial/RestartStopped 7.95
227 TestMountStart/serial/VerifyMountPostStop 0.27
230 TestMultiNode/serial/FreshStart2Nodes 138.33
231 TestMultiNode/serial/DeployApp2Nodes 7.3
232 TestMultiNode/serial/PingHostFrom2Pods 0.87
233 TestMultiNode/serial/AddNode 59.97
234 TestMultiNode/serial/MultiNodeLabels 0.09
235 TestMultiNode/serial/ProfileList 0.76
236 TestMultiNode/serial/CopyFile 10.42
237 TestMultiNode/serial/StopNode 2.4
238 TestMultiNode/serial/StartAfterStop 8.17
239 TestMultiNode/serial/RestartKeepsNodes 79.22
240 TestMultiNode/serial/DeleteNode 5.67
241 TestMultiNode/serial/StopMultiNode 23.99
242 TestMultiNode/serial/RestartMultiNode 53.08
243 TestMultiNode/serial/ValidateNameConflict 36.79
248 TestPreload 127.12
250 TestScheduledStopUnix 106.77
253 TestInsufficientStorage 14.57
254 TestRunningBinaryUpgrade 51.76
256 TestKubernetesUpgrade 208.39
257 TestMissingContainerUpgrade 121.71
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
260 TestNoKubernetes/serial/StartWithK8s 45.07
261 TestNoKubernetes/serial/StartWithStopK8s 26.34
262 TestNoKubernetes/serial/Start 9.53
263 TestNoKubernetes/serial/VerifyK8sNotRunning 0.27
264 TestNoKubernetes/serial/ProfileList 0.7
265 TestNoKubernetes/serial/Stop 1.31
266 TestNoKubernetes/serial/StartNoArgs 8.25
267 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.45
268 TestStoppedBinaryUpgrade/Setup 1
269 TestStoppedBinaryUpgrade/Upgrade 67.85
270 TestStoppedBinaryUpgrade/MinikubeLogs 1.23
279 TestPause/serial/Start 94.09
280 TestPause/serial/SecondStartNoReconfiguration 28.64
288 TestNetworkPlugins/group/false 3.87
294 TestStartStop/group/old-k8s-version/serial/FirstStart 63.89
295 TestStartStop/group/old-k8s-version/serial/DeployApp 10.43
297 TestStartStop/group/old-k8s-version/serial/Stop 11.98
298 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
299 TestStartStop/group/old-k8s-version/serial/SecondStart 52.07
300 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
301 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.12
302 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
305 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 86.84
307 TestStartStop/group/embed-certs/serial/FirstStart 84.62
308 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.35
310 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.04
311 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
312 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 55.88
313 TestStartStop/group/embed-certs/serial/DeployApp 9.37
315 TestStartStop/group/embed-certs/serial/Stop 12.78
316 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
317 TestStartStop/group/embed-certs/serial/SecondStart 58.4
318 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
319 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.2
320 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.54
323 TestStartStop/group/no-preload/serial/FirstStart 67.94
324 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
325 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.13
326 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.35
329 TestStartStop/group/newest-cni/serial/FirstStart 40.28
330 TestStartStop/group/no-preload/serial/DeployApp 9.38
331 TestStartStop/group/newest-cni/serial/DeployApp 0
334 TestStartStop/group/newest-cni/serial/Stop 1.6
335 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.25
336 TestStartStop/group/newest-cni/serial/SecondStart 16.31
337 TestStartStop/group/no-preload/serial/Stop 12.24
338 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.32
339 TestStartStop/group/no-preload/serial/SecondStart 52.67
340 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
341 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
344 TestNetworkPlugins/group/auto/Start 87.11
345 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
346 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
347 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
349 TestNetworkPlugins/group/kindnet/Start 86.75
350 TestNetworkPlugins/group/auto/KubeletFlags 0.34
351 TestNetworkPlugins/group/auto/NetCatPod 12.37
352 TestNetworkPlugins/group/auto/DNS 0.19
353 TestNetworkPlugins/group/auto/Localhost 0.16
354 TestNetworkPlugins/group/auto/HairPin 0.14
355 TestNetworkPlugins/group/calico/Start 62.08
356 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
357 TestNetworkPlugins/group/kindnet/KubeletFlags 0.39
358 TestNetworkPlugins/group/kindnet/NetCatPod 12.34
359 TestNetworkPlugins/group/kindnet/DNS 0.22
360 TestNetworkPlugins/group/kindnet/Localhost 0.23
361 TestNetworkPlugins/group/kindnet/HairPin 0.22
362 TestNetworkPlugins/group/calico/ControllerPod 6.01
363 TestNetworkPlugins/group/calico/KubeletFlags 0.35
364 TestNetworkPlugins/group/calico/NetCatPod 11.37
365 TestNetworkPlugins/group/custom-flannel/Start 65.59
366 TestNetworkPlugins/group/calico/DNS 0.19
367 TestNetworkPlugins/group/calico/Localhost 0.19
368 TestNetworkPlugins/group/calico/HairPin 0.16
369 TestNetworkPlugins/group/enable-default-cni/Start 75.91
370 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.46
371 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.44
372 TestNetworkPlugins/group/custom-flannel/DNS 0.25
373 TestNetworkPlugins/group/custom-flannel/Localhost 0.2
374 TestNetworkPlugins/group/custom-flannel/HairPin 0.2
375 TestNetworkPlugins/group/flannel/Start 65.02
376 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.42
377 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.39
378 TestNetworkPlugins/group/enable-default-cni/DNS 0.24
379 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
380 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
381 TestNetworkPlugins/group/bridge/Start 80.99
382 TestNetworkPlugins/group/flannel/ControllerPod 6.06
383 TestNetworkPlugins/group/flannel/KubeletFlags 0.54
384 TestNetworkPlugins/group/flannel/NetCatPod 12.34
385 TestNetworkPlugins/group/flannel/DNS 0.18
386 TestNetworkPlugins/group/flannel/Localhost 0.17
387 TestNetworkPlugins/group/flannel/HairPin 0.17
388 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
389 TestNetworkPlugins/group/bridge/NetCatPod 10.3
390 TestNetworkPlugins/group/bridge/DNS 0.16
391 TestNetworkPlugins/group/bridge/Localhost 0.18
392 TestNetworkPlugins/group/bridge/HairPin 0.13
TestDownloadOnly/v1.28.0/json-events (6.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-158495 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-158495 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (6.157688768s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (6.16s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1018 17:11:59.900207    4320 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1018 17:11:59.900284    4320 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-158495
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-158495: exit status 85 (86.58148ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-158495 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-158495 │ jenkins │ v1.37.0 │ 18 Oct 25 17:11 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 17:11:53
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 17:11:53.785275    4325 out.go:360] Setting OutFile to fd 1 ...
	I1018 17:11:53.785491    4325 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 17:11:53.785516    4325 out.go:374] Setting ErrFile to fd 2...
	I1018 17:11:53.785538    4325 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 17:11:53.785835    4325 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-2509/.minikube/bin
	W1018 17:11:53.786004    4325 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21409-2509/.minikube/config/config.json: open /home/jenkins/minikube-integration/21409-2509/.minikube/config/config.json: no such file or directory
	I1018 17:11:53.786510    4325 out.go:368] Setting JSON to true
	I1018 17:11:53.787338    4325 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3263,"bootTime":1760804251,"procs":150,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1018 17:11:53.787432    4325 start.go:141] virtualization:  
	I1018 17:11:53.791503    4325 out.go:99] [download-only-158495] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1018 17:11:53.791686    4325 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball: no such file or directory
	I1018 17:11:53.791804    4325 notify.go:220] Checking for updates...
	I1018 17:11:53.795188    4325 out.go:171] MINIKUBE_LOCATION=21409
	I1018 17:11:53.798180    4325 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 17:11:53.801113    4325 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21409-2509/kubeconfig
	I1018 17:11:53.804098    4325 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-2509/.minikube
	I1018 17:11:53.806977    4325 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1018 17:11:53.812463    4325 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1018 17:11:53.812739    4325 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 17:11:53.839423    4325 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 17:11:53.839533    4325 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 17:11:54.247983    4325 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-18 17:11:54.238257039 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 17:11:54.248089    4325 docker.go:318] overlay module found
	I1018 17:11:54.251241    4325 out.go:99] Using the docker driver based on user configuration
	I1018 17:11:54.251286    4325 start.go:305] selected driver: docker
	I1018 17:11:54.251299    4325 start.go:925] validating driver "docker" against <nil>
	I1018 17:11:54.251401    4325 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 17:11:54.309633    4325 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-18 17:11:54.300923806 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 17:11:54.309796    4325 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 17:11:54.310062    4325 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1018 17:11:54.310226    4325 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1018 17:11:54.313287    4325 out.go:171] Using Docker driver with root privileges
	I1018 17:11:54.316346    4325 cni.go:84] Creating CNI manager for ""
	I1018 17:11:54.316414    4325 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 17:11:54.316428    4325 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 17:11:54.316509    4325 start.go:349] cluster config:
	{Name:download-only-158495 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-158495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 17:11:54.319335    4325 out.go:99] Starting "download-only-158495" primary control-plane node in "download-only-158495" cluster
	I1018 17:11:54.319377    4325 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 17:11:54.322198    4325 out.go:99] Pulling base image v0.0.48-1760609789-21757 ...
	I1018 17:11:54.322232    4325 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1018 17:11:54.322397    4325 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 17:11:54.336594    4325 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 to local cache
	I1018 17:11:54.336788    4325 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory
	I1018 17:11:54.336893    4325 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 to local cache
	I1018 17:11:54.378128    4325 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1018 17:11:54.378171    4325 cache.go:58] Caching tarball of preloaded images
	I1018 17:11:54.378324    4325 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1018 17:11:54.381821    4325 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1018 17:11:54.381848    4325 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1018 17:11:54.471788    4325 preload.go:290] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I1018 17:11:54.471914    4325 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-158495 host does not exist
	  To start a cluster, run: "minikube start -p download-only-158495"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.09s)
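The preload handling logged above boils down to: look for the cached tarball under .minikube/cache/preloaded-tarball, ask the GCS API for its md5, and download with that checksum appended to the URL. A small standalone sketch (not minikube's own code) that re-verifies an already-downloaded preload against the path and checksum reported in the log above:

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"os"
	"path/filepath"
)

func main() {
	home, err := os.UserHomeDir()
	if err != nil {
		log.Fatal(err)
	}
	// path and expected checksum are taken from the "Last Start" log above
	path := filepath.Join(home, ".minikube/cache/preloaded-tarball",
		"preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4")
	const want = "e092595ade89dbfc477bd4cd6b9c633b"

	f, err := os.Open(path)
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		log.Fatal(err)
	}
	got := hex.EncodeToString(h.Sum(nil))
	fmt.Printf("md5 %s (want %s) match=%v\n", got, want, got == want)
}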

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.57s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.57s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-158495
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestDownloadOnly/v1.34.1/json-events (7.03s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-339428 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-339428 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (7.03177003s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (7.03s)

                                                
                                    
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1018 17:12:07.742464    4320 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1018 17:12:07.742509    4320 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-339428
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-339428: exit status 85 (86.617033ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-158495 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-158495 │ jenkins │ v1.37.0 │ 18 Oct 25 17:11 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 18 Oct 25 17:12 UTC │ 18 Oct 25 17:12 UTC │
	│ delete  │ -p download-only-158495                                                                                                                                                   │ download-only-158495 │ jenkins │ v1.37.0 │ 18 Oct 25 17:12 UTC │ 18 Oct 25 17:12 UTC │
	│ start   │ -o=json --download-only -p download-only-339428 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-339428 │ jenkins │ v1.37.0 │ 18 Oct 25 17:12 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 17:12:00
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 17:12:00.752252    4521 out.go:360] Setting OutFile to fd 1 ...
	I1018 17:12:00.752442    4521 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 17:12:00.752473    4521 out.go:374] Setting ErrFile to fd 2...
	I1018 17:12:00.752491    4521 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 17:12:00.752760    4521 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-2509/.minikube/bin
	I1018 17:12:00.753228    4521 out.go:368] Setting JSON to true
	I1018 17:12:00.753966    4521 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3270,"bootTime":1760804251,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1018 17:12:00.754061    4521 start.go:141] virtualization:  
	I1018 17:12:00.757646    4521 out.go:99] [download-only-339428] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 17:12:00.757914    4521 notify.go:220] Checking for updates...
	I1018 17:12:00.761076    4521 out.go:171] MINIKUBE_LOCATION=21409
	I1018 17:12:00.764156    4521 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 17:12:00.767370    4521 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21409-2509/kubeconfig
	I1018 17:12:00.770418    4521 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-2509/.minikube
	I1018 17:12:00.773353    4521 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1018 17:12:00.779011    4521 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1018 17:12:00.779360    4521 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 17:12:00.807174    4521 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 17:12:00.807281    4521 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 17:12:00.878588    4521 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:51 SystemTime:2025-10-18 17:12:00.869122306 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 17:12:00.878706    4521 docker.go:318] overlay module found
	I1018 17:12:00.881808    4521 out.go:99] Using the docker driver based on user configuration
	I1018 17:12:00.881849    4521 start.go:305] selected driver: docker
	I1018 17:12:00.881855    4521 start.go:925] validating driver "docker" against <nil>
	I1018 17:12:00.881968    4521 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 17:12:00.943704    4521 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:51 SystemTime:2025-10-18 17:12:00.93511291 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 17:12:00.943862    4521 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 17:12:00.944126    4521 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1018 17:12:00.944276    4521 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1018 17:12:00.947434    4521 out.go:171] Using Docker driver with root privileges
	I1018 17:12:00.950262    4521 cni.go:84] Creating CNI manager for ""
	I1018 17:12:00.950334    4521 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 17:12:00.950347    4521 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 17:12:00.950426    4521 start.go:349] cluster config:
	{Name:download-only-339428 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-339428 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 17:12:00.953394    4521 out.go:99] Starting "download-only-339428" primary control-plane node in "download-only-339428" cluster
	I1018 17:12:00.953431    4521 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 17:12:00.956444    4521 out.go:99] Pulling base image v0.0.48-1760609789-21757 ...
	I1018 17:12:00.956468    4521 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 17:12:00.956574    4521 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 17:12:00.972465    4521 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 to local cache
	I1018 17:12:00.972601    4521 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory
	I1018 17:12:00.972623    4521 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory, skipping pull
	I1018 17:12:00.972628    4521 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in cache, skipping pull
	I1018 17:12:00.972639    4521 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 as a tarball
	I1018 17:12:01.017255    4521 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1018 17:12:01.017287    4521 cache.go:58] Caching tarball of preloaded images
	I1018 17:12:01.017478    4521 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 17:12:01.020795    4521 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1018 17:12:01.020836    4521 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1018 17:12:01.113014    4521 preload.go:290] Got checksum from GCS API "bc3e4aa50814345ef9ba3452bb5efb9f"
	I1018 17:12:01.113068    4521 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4?checksum=md5:bc3e4aa50814345ef9ba3452bb5efb9f -> /home/jenkins/minikube-integration/21409-2509/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-339428 host does not exist
	  To start a cluster, run: "minikube start -p download-only-339428"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.09s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-339428
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestBinaryMirror (0.6s)

                                                
                                                
=== RUN   TestBinaryMirror
I1018 17:12:08.895044    4320 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-644672 --alsologtostderr --binary-mirror http://127.0.0.1:44133 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-644672" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-644672
--- PASS: TestBinaryMirror (0.60s)
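The --binary-mirror flag above points minikube at an HTTP server laid out like dl.k8s.io; the kubectl URL in the log shows the expected /<version>/bin/<os>/<arch>/<binary> path with a .sha256 sibling file. A minimal sketch of such a mirror, assuming the binaries have already been staged under a local ./mirror directory with that layout (the test itself spins up its own temporary server; this is only an illustration):

package main

import (
	"log"
	"net/http"
)

func main() {
	// ./mirror is assumed to contain e.g. v1.34.1/bin/linux/arm64/kubectl
	// and v1.34.1/bin/linux/arm64/kubectl.sha256, mirroring dl.k8s.io.
	http.Handle("/", http.FileServer(http.Dir("./mirror")))
	log.Println("serving binary mirror on 127.0.0.1:44133")
	log.Fatal(http.ListenAndServe("127.0.0.1:44133", nil))
}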

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-164474
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-164474: exit status 85 (71.6275ms)

                                                
                                                
-- stdout --
	* Profile "addons-164474" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-164474"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-164474
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-164474: exit status 85 (83.255297ms)

                                                
                                                
-- stdout --
	* Profile "addons-164474" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-164474"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
TestAddons/Setup (169.54s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-164474 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-164474 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m49.540580273s)
--- PASS: TestAddons/Setup (169.54s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.21s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-164474 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-164474 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.21s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (9.25s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-164474 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-164474 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [a212ae5b-eb4f-4f94-a0e8-d10307a75f8b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [a212ae5b-eb4f-4f94-a0e8-d10307a75f8b] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.00371988s
addons_test.go:694: (dbg) Run:  kubectl --context addons-164474 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-164474 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-164474 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-164474 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.25s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.46s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-164474
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-164474: (12.193028208s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-164474
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-164474
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-164474
--- PASS: TestAddons/StoppedEnableDisable (12.46s)

                                                
                                    
TestCertOptions (40.55s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-327418 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-327418 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (37.70668013s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-327418 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-327418 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-327418 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-327418" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-327418
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-327418: (2.109226122s)
--- PASS: TestCertOptions (40.55s)
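For context on the openssl step above: the certificate check amounts to confirming that the extra --apiserver-ips and --apiserver-names values appear in the apiserver certificate's Subject Alternative Name block, and that the requested port 8555 is in use. A narrower sketch of that check (not part of the recorded run; it assumes the cert-options-327418 node is still running):

	# sketch: isolate the SAN block where 192.168.15.15 and www.google.com should be listed
	out/minikube-linux-arm64 -p cert-options-327418 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"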

                                                
                                    
TestCertExpiration (253.41s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-463770 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-463770 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (40.1589135s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-463770 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-463770 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (30.151953834s)
helpers_test.go:175: Cleaning up "cert-expiration-463770" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-463770
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-463770: (3.103350288s)
--- PASS: TestCertExpiration (253.41s)

                                                
                                    
TestForceSystemdFlag (37.69s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-837300 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-837300 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (34.605650496s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-837300 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-837300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-837300
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-837300: (2.757861425s)
--- PASS: TestForceSystemdFlag (37.69s)
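The file read above is where CRI-O records its cgroup manager, which is the setting --force-systemd is meant to switch. A narrower sketch of the same check (not from this run; it assumes the force-systemd-flag-837300 node is still up and that the setting uses the standard cgroup_manager key):

	# sketch: expect cgroup_manager = "systemd" when --force-systemd took effect
	out/minikube-linux-arm64 -p force-systemd-flag-837300 ssh "grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"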

                                                
                                    
TestForceSystemdEnv (42.08s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-785999 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1018 18:17:03.655211    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/functional-306136/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-785999 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (39.199478371s)
helpers_test.go:175: Cleaning up "force-systemd-env-785999" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-785999
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-785999: (2.883829873s)
--- PASS: TestForceSystemdEnv (42.08s)

                                                
                                    
TestErrorSpam/setup (33.63s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-153160 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-153160 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-153160 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-153160 --driver=docker  --container-runtime=crio: (33.63210504s)
--- PASS: TestErrorSpam/setup (33.63s)

                                                
                                    
TestErrorSpam/start (0.77s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-153160 --log_dir /tmp/nospam-153160 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-153160 --log_dir /tmp/nospam-153160 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-153160 --log_dir /tmp/nospam-153160 start --dry-run
--- PASS: TestErrorSpam/start (0.77s)

                                                
                                    
TestErrorSpam/status (1.13s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-153160 --log_dir /tmp/nospam-153160 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-153160 --log_dir /tmp/nospam-153160 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-153160 --log_dir /tmp/nospam-153160 status
--- PASS: TestErrorSpam/status (1.13s)

                                                
                                    
TestErrorSpam/pause (6.34s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-153160 --log_dir /tmp/nospam-153160 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-153160 --log_dir /tmp/nospam-153160 pause: exit status 80 (2.260170008s)

                                                
                                                
-- stdout --
	* Pausing node nospam-153160 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T17:18:58Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-153160 --log_dir /tmp/nospam-153160 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-153160 --log_dir /tmp/nospam-153160 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-153160 --log_dir /tmp/nospam-153160 pause: exit status 80 (2.453294288s)

                                                
                                                
-- stdout --
	* Pausing node nospam-153160 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T17:19:00Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-153160 --log_dir /tmp/nospam-153160 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-153160 --log_dir /tmp/nospam-153160 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-153160 --log_dir /tmp/nospam-153160 pause: exit status 80 (1.621927734s)

                                                
                                                
-- stdout --
	* Pausing node nospam-153160 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T17:19:02Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-153160 --log_dir /tmp/nospam-153160 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.34s)
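All three pause attempts above fail the same way: runc cannot open its state directory inside the node. A minimal diagnostic sketch (not taken from this run; it assumes the nospam-153160 node is still running) that repeats the listing minikube attempts:

	# sketch: confirm whether /run/runc exists, then rerun the exact listing that pause relies on
	out/minikube-linux-arm64 -p nospam-153160 ssh -- "ls -ld /run/runc"
	out/minikube-linux-arm64 -p nospam-153160 ssh -- "sudo runc --root /run/runc list -f json"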

                                                
                                    
TestErrorSpam/unpause (5.47s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-153160 --log_dir /tmp/nospam-153160 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-153160 --log_dir /tmp/nospam-153160 unpause: exit status 80 (1.85426193s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-153160 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T17:19:04Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-153160 --log_dir /tmp/nospam-153160 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-153160 --log_dir /tmp/nospam-153160 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-153160 --log_dir /tmp/nospam-153160 unpause: exit status 80 (1.950106468s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-153160 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T17:19:06Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-153160 --log_dir /tmp/nospam-153160 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-153160 --log_dir /tmp/nospam-153160 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-153160 --log_dir /tmp/nospam-153160 unpause: exit status 80 (1.66055486s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-153160 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T17:19:07Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-153160 --log_dir /tmp/nospam-153160 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.47s)

                                                
                                    
TestErrorSpam/stop (1.57s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-153160 --log_dir /tmp/nospam-153160 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-153160 --log_dir /tmp/nospam-153160 stop: (1.365887338s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-153160 --log_dir /tmp/nospam-153160 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-153160 --log_dir /tmp/nospam-153160 stop
--- PASS: TestErrorSpam/stop (1.57s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21409-2509/.minikube/files/etc/test/nested/copy/4320/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (78.81s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-306136 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1018 17:20:00.540563    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 17:20:00.547380    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 17:20:00.559584    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 17:20:00.581121    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 17:20:00.622642    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 17:20:00.704265    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 17:20:00.865891    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 17:20:01.187842    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 17:20:01.829865    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 17:20:03.111512    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 17:20:05.674361    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 17:20:10.796481    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 17:20:21.038712    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-306136 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m18.805258093s)
--- PASS: TestFunctional/serial/StartWithProxy (78.81s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (41.26s)

=== RUN   TestFunctional/serial/SoftStart
I1018 17:20:33.084751    4320 config.go:182] Loaded profile config "functional-306136": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-306136 --alsologtostderr -v=8
E1018 17:20:41.520575    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-306136 --alsologtostderr -v=8: (41.259996994s)
functional_test.go:678: soft start took 41.261706835s for "functional-306136" cluster.
I1018 17:21:14.345105    4320 config.go:182] Loaded profile config "functional-306136": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (41.26s)

                                                
                                    
TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.11s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-306136 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.52s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-306136 cache add registry.k8s.io/pause:3.1: (1.172160474s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-306136 cache add registry.k8s.io/pause:3.3: (1.214504204s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-306136 cache add registry.k8s.io/pause:latest: (1.13715568s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.52s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-306136 /tmp/TestFunctionalserialCacheCmdcacheadd_local762093545/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 cache add minikube-local-cache-test:functional-306136
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 cache delete minikube-local-cache-test:functional-306136
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-306136
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.82s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-306136 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (301.921412ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.82s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 kubectl -- --context functional-306136 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.15s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-306136 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                    
TestFunctional/serial/ExtraConfig (31.49s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-306136 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1018 17:21:22.482287    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-306136 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (31.486725319s)
functional_test.go:776: restart took 31.486817873s for "functional-306136" cluster.
I1018 17:21:53.230116    4320 config.go:182] Loaded profile config "functional-306136": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (31.49s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.11s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-306136 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.46s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-306136 logs: (1.461121831s)
--- PASS: TestFunctional/serial/LogsCmd (1.46s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.48s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 logs --file /tmp/TestFunctionalserialLogsFileCmd2361090227/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-306136 logs --file /tmp/TestFunctionalserialLogsFileCmd2361090227/001/logs.txt: (1.477723609s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.48s)

                                                
                                    
TestFunctional/serial/InvalidService (4.4s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-306136 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-306136
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-306136: exit status 115 (386.385176ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31849 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-306136 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.40s)
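The SVC_UNREACHABLE exit above means the service object exists but no running pod backs it, so the service tunnel has nothing to route to. A sketch of how to confirm that from the same context (not part of the recorded run; invalid-svc must still be applied):

	# sketch: an empty ENDPOINTS column confirms the selector matches no running pod
	kubectl --context functional-306136 get endpoints invalid-svc
	kubectl --context functional-306136 describe service invalid-svc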

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.51s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-306136 config get cpus: exit status 14 (89.533987ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-306136 config get cpus: exit status 14 (88.063369ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.51s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (13.07s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-306136 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-306136 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 31947: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.07s)

                                                
                                    
TestFunctional/parallel/DryRun (0.44s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-306136 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-306136 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (191.48374ms)

                                                
                                                
-- stdout --
	* [functional-306136] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-2509/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-2509/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 17:32:33.587317   31477 out.go:360] Setting OutFile to fd 1 ...
	I1018 17:32:33.587483   31477 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 17:32:33.587494   31477 out.go:374] Setting ErrFile to fd 2...
	I1018 17:32:33.587499   31477 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 17:32:33.587789   31477 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-2509/.minikube/bin
	I1018 17:32:33.588145   31477 out.go:368] Setting JSON to false
	I1018 17:32:33.589961   31477 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":4503,"bootTime":1760804251,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1018 17:32:33.590035   31477 start.go:141] virtualization:  
	I1018 17:32:33.593357   31477 out.go:179] * [functional-306136] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 17:32:33.597187   31477 out.go:179]   - MINIKUBE_LOCATION=21409
	I1018 17:32:33.597318   31477 notify.go:220] Checking for updates...
	I1018 17:32:33.603015   31477 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 17:32:33.605922   31477 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-2509/kubeconfig
	I1018 17:32:33.608711   31477 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-2509/.minikube
	I1018 17:32:33.611437   31477 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 17:32:33.614312   31477 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 17:32:33.617603   31477 config.go:182] Loaded profile config "functional-306136": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:32:33.618257   31477 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 17:32:33.641635   31477 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 17:32:33.641745   31477 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 17:32:33.707955   31477 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 17:32:33.698810315 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 17:32:33.708059   31477 docker.go:318] overlay module found
	I1018 17:32:33.711237   31477 out.go:179] * Using the docker driver based on existing profile
	I1018 17:32:33.713991   31477 start.go:305] selected driver: docker
	I1018 17:32:33.714019   31477 start.go:925] validating driver "docker" against &{Name:functional-306136 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-306136 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 17:32:33.714113   31477 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 17:32:33.717773   31477 out.go:203] 
	W1018 17:32:33.720666   31477 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1018 17:32:33.723509   31477 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-306136 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.44s)

TestFunctional/parallel/InternationalLanguage (0.21s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-306136 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-306136 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (210.659018ms)

-- stdout --
	* [functional-306136] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-2509/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-2509/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1018 17:32:33.381870   31429 out.go:360] Setting OutFile to fd 1 ...
	I1018 17:32:33.382105   31429 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 17:32:33.382118   31429 out.go:374] Setting ErrFile to fd 2...
	I1018 17:32:33.382122   31429 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 17:32:33.383613   31429 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-2509/.minikube/bin
	I1018 17:32:33.384082   31429 out.go:368] Setting JSON to false
	I1018 17:32:33.385034   31429 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":4503,"bootTime":1760804251,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1018 17:32:33.385113   31429 start.go:141] virtualization:  
	I1018 17:32:33.389862   31429 out.go:179] * [functional-306136] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1018 17:32:33.392911   31429 out.go:179]   - MINIKUBE_LOCATION=21409
	I1018 17:32:33.393057   31429 notify.go:220] Checking for updates...
	I1018 17:32:33.398876   31429 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 17:32:33.401903   31429 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-2509/kubeconfig
	I1018 17:32:33.404685   31429 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-2509/.minikube
	I1018 17:32:33.407473   31429 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 17:32:33.410274   31429 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 17:32:33.413589   31429 config.go:182] Loaded profile config "functional-306136": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:32:33.414215   31429 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 17:32:33.452445   31429 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 17:32:33.452583   31429 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 17:32:33.514250   31429 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 17:32:33.50392243 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 17:32:33.514363   31429 docker.go:318] overlay module found
	I1018 17:32:33.517555   31429 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1018 17:32:33.520379   31429 start.go:305] selected driver: docker
	I1018 17:32:33.520399   31429 start.go:925] validating driver "docker" against &{Name:functional-306136 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-306136 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 17:32:33.520498   31429 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 17:32:33.524044   31429 out.go:203] 
	W1018 17:32:33.527782   31429 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1018 17:32:33.530617   31429 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.21s)

TestFunctional/parallel/StatusCmd (1.04s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.04s)

TestFunctional/parallel/AddonsCmd (0.16s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

TestFunctional/parallel/PersistentVolumeClaim (24.41s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [f921b3a5-63a0-4ac8-b575-51756da2bc07] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004337088s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-306136 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-306136 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-306136 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-306136 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [51735653-ff69-44a2-bfc9-b2071edb4ecf] Pending
helpers_test.go:352: "sp-pod" [51735653-ff69-44a2-bfc9-b2071edb4ecf] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [51735653-ff69-44a2-bfc9-b2071edb4ecf] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.003007579s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-306136 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-306136 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-306136 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [dbd8fa5a-b54c-4437-9a29-f1333a62e11e] Pending
helpers_test.go:352: "sp-pod" [dbd8fa5a-b54c-4437-9a29-f1333a62e11e] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003317241s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-306136 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.41s)

TestFunctional/parallel/SSHCmd (0.71s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.71s)

TestFunctional/parallel/CpCmd (2.15s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 ssh -n functional-306136 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 cp functional-306136:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1321492752/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 ssh -n functional-306136 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 ssh -n functional-306136 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.15s)

TestFunctional/parallel/FileSync (0.29s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/4320/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 ssh "sudo cat /etc/test/nested/copy/4320/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.29s)

TestFunctional/parallel/CertSync (1.7s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/4320.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 ssh "sudo cat /etc/ssl/certs/4320.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/4320.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 ssh "sudo cat /usr/share/ca-certificates/4320.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/43202.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 ssh "sudo cat /etc/ssl/certs/43202.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/43202.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 ssh "sudo cat /usr/share/ca-certificates/43202.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.70s)

TestFunctional/parallel/NodeLabels (0.1s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-306136 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.7s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-306136 ssh "sudo systemctl is-active docker": exit status 1 (362.410559ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-306136 ssh "sudo systemctl is-active containerd": exit status 1 (340.383975ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.70s)

TestFunctional/parallel/License (0.38s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.38s)

TestFunctional/parallel/Version/short (0.08s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

TestFunctional/parallel/Version/components (1.38s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-306136 version -o=json --components: (1.382310478s)
--- PASS: TestFunctional/parallel/Version/components (1.38s)

TestFunctional/parallel/ImageCommands/ImageListShort (1.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 image ls --format short --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-linux-arm64 -p functional-306136 image ls --format short --alsologtostderr: (1.334685189s)
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-306136 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-306136 image ls --format short --alsologtostderr:
I1018 17:32:42.438112   32873 out.go:360] Setting OutFile to fd 1 ...
I1018 17:32:42.438423   32873 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 17:32:42.438456   32873 out.go:374] Setting ErrFile to fd 2...
I1018 17:32:42.438476   32873 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 17:32:42.438769   32873 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-2509/.minikube/bin
I1018 17:32:42.439451   32873 config.go:182] Loaded profile config "functional-306136": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 17:32:42.439619   32873 config.go:182] Loaded profile config "functional-306136": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 17:32:42.440217   32873 cli_runner.go:164] Run: docker container inspect functional-306136 --format={{.State.Status}}
I1018 17:32:42.460240   32873 ssh_runner.go:195] Run: systemctl --version
I1018 17:32:42.460298   32873 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-306136
I1018 17:32:42.482762   32873 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/functional-306136/id_rsa Username:docker}
I1018 17:32:42.594207   32873 ssh_runner.go:195] Run: sudo crictl images --output json
I1018 17:32:43.694072   32873 ssh_runner.go:235] Completed: sudo crictl images --output json: (1.099782652s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (1.33s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-306136 image ls --format json --alsologtostderr:
[{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"9c92f55c0336c2597a5b458ba84a3fd242b209d8b5079443646a0d269df0d4aa","repoDigests":["docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0","docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54704654"},{"id":"7eb2c6ff0c5a768fd309321bc2ade0e4e11af
cf4f2017ef1d0ff00d91fdf992a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f","registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"72629077"},{"id":"b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500","registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"51592017"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["reg
istry.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1634527"},{"id":"a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":["registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d4
31fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"205987068"},{"id":"43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902","registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"84753391"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"a87344de7237dc99ef85d6c3ddc7e6cd5b4176603e93ed3e74b375f0aa921a3b","repoDigests":[],"repoTags":[],"size":"1638178"},{"id":"05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9","repoDigests":["registry.k8s.io/kube-proxy@sha2
56:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6","registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"75938711"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTa
gs":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"e35ad067421ccda484ee30e4ccc8a38fa13f9a21dd8d356e495c2d3a1f0766e9","repoDigests":["docker.io/library/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6","docker.io/library/nginx@sha256:ac03974aaaeb5e3fbe2ab74d7f2badf1388596f6877cbacf78af3617addbba9a"],"repoTags":["docker.io/library/nginx:latest"],"size":"184136558"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-p
rovisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-306136 image ls --format json --alsologtostderr:
I1018 17:32:47.118746   33178 out.go:360] Setting OutFile to fd 1 ...
I1018 17:32:47.121434   33178 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 17:32:47.121450   33178 out.go:374] Setting ErrFile to fd 2...
I1018 17:32:47.121456   33178 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 17:32:47.122709   33178 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-2509/.minikube/bin
I1018 17:32:47.123888   33178 config.go:182] Loaded profile config "functional-306136": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 17:32:47.124178   33178 config.go:182] Loaded profile config "functional-306136": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 17:32:47.124823   33178 cli_runner.go:164] Run: docker container inspect functional-306136 --format={{.State.Status}}
I1018 17:32:47.149557   33178 ssh_runner.go:195] Run: systemctl --version
I1018 17:32:47.149629   33178 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-306136
I1018 17:32:47.169826   33178 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/functional-306136/id_rsa Username:docker}
I1018 17:32:47.280571   33178 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.34s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-306136 image ls --format yaml --alsologtostderr:
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: 9c92f55c0336c2597a5b458ba84a3fd242b209d8b5079443646a0d269df0d4aa
repoDigests:
- docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0
- docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22
repoTags:
- docker.io/library/nginx:alpine
size: "54704654"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"
- id: b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
- registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "51592017"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "205987068"
- id: 7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "72629077"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: e35ad067421ccda484ee30e4ccc8a38fa13f9a21dd8d356e495c2d3a1f0766e9
repoDigests:
- docker.io/library/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6
- docker.io/library/nginx@sha256:ac03974aaaeb5e3fbe2ab74d7f2badf1388596f6877cbacf78af3617addbba9a
repoTags:
- docker.io/library/nginx:latest
size: "184136558"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9
repoDigests:
- registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "75938711"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
- registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "84753391"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-306136 image ls --format yaml --alsologtostderr:
I1018 17:32:43.761300   32920 out.go:360] Setting OutFile to fd 1 ...
I1018 17:32:43.761451   32920 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 17:32:43.761464   32920 out.go:374] Setting ErrFile to fd 2...
I1018 17:32:43.761469   32920 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 17:32:43.761748   32920 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-2509/.minikube/bin
I1018 17:32:43.762378   32920 config.go:182] Loaded profile config "functional-306136": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 17:32:43.762497   32920 config.go:182] Loaded profile config "functional-306136": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 17:32:43.763670   32920 cli_runner.go:164] Run: docker container inspect functional-306136 --format={{.State.Status}}
I1018 17:32:43.784013   32920 ssh_runner.go:195] Run: systemctl --version
I1018 17:32:43.784115   32920 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-306136
I1018 17:32:43.810804   32920 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/functional-306136/id_rsa Username:docker}
I1018 17:32:43.919392   32920 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.11s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-306136 ssh pgrep buildkitd: exit status 1 (393.75987ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 image build -t localhost/my-image:functional-306136 testdata/build --alsologtostderr
2025/10/18 17:32:46 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-306136 image build -t localhost/my-image:functional-306136 testdata/build --alsologtostderr: (3.419146618s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-306136 image build -t localhost/my-image:functional-306136 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> a87344de723
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-306136
--> 78d83d981d2
Successfully tagged localhost/my-image:functional-306136
78d83d981d28e507977e0a724615d827de35ac85de15f9464d6390caa11eda16
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-306136 image build -t localhost/my-image:functional-306136 testdata/build --alsologtostderr:
I1018 17:32:44.415270   33034 out.go:360] Setting OutFile to fd 1 ...
I1018 17:32:44.415487   33034 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 17:32:44.415493   33034 out.go:374] Setting ErrFile to fd 2...
I1018 17:32:44.415499   33034 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 17:32:44.415736   33034 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-2509/.minikube/bin
I1018 17:32:44.416434   33034 config.go:182] Loaded profile config "functional-306136": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 17:32:44.417089   33034 config.go:182] Loaded profile config "functional-306136": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 17:32:44.417549   33034 cli_runner.go:164] Run: docker container inspect functional-306136 --format={{.State.Status}}
I1018 17:32:44.443241   33034 ssh_runner.go:195] Run: systemctl --version
I1018 17:32:44.443306   33034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-306136
I1018 17:32:44.460758   33034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/functional-306136/id_rsa Username:docker}
I1018 17:32:44.563718   33034 build_images.go:161] Building image from path: /tmp/build.169087564.tar
I1018 17:32:44.563798   33034 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1018 17:32:44.571692   33034 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.169087564.tar
I1018 17:32:44.575930   33034 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.169087564.tar: stat -c "%s %y" /var/lib/minikube/build/build.169087564.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.169087564.tar': No such file or directory
I1018 17:32:44.575959   33034 ssh_runner.go:362] scp /tmp/build.169087564.tar --> /var/lib/minikube/build/build.169087564.tar (3072 bytes)
I1018 17:32:44.595516   33034 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.169087564
I1018 17:32:44.603445   33034 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.169087564 -xf /var/lib/minikube/build/build.169087564.tar
I1018 17:32:44.611595   33034 crio.go:315] Building image: /var/lib/minikube/build/build.169087564
I1018 17:32:44.611739   33034 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-306136 /var/lib/minikube/build/build.169087564 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1018 17:32:47.735496   33034 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-306136 /var/lib/minikube/build/build.169087564 --cgroup-manager=cgroupfs: (3.123713688s)
I1018 17:32:47.735583   33034 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.169087564
I1018 17:32:47.746517   33034 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.169087564.tar
I1018 17:32:47.755794   33034 build_images.go:217] Built localhost/my-image:functional-306136 from /tmp/build.169087564.tar
I1018 17:32:47.755823   33034 build_images.go:133] succeeded building to: functional-306136
I1018 17:32:47.755828   33034 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.11s)

TestFunctional/parallel/ImageCommands/Setup (0.76s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-306136
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.76s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.66s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-306136 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-306136 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-306136 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-306136 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 27151: os: process already finished
helpers_test.go:525: unable to kill pid 27021: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.66s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-306136 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.44s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-306136 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [6041c232-2072-4095-bdfe-ca74af963c95] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [6041c232-2072-4095-bdfe-ca74af963c95] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.002948207s
I1018 17:22:14.095288    4320 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.44s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 image rm kicbase/echo-server:functional-306136 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-306136 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.111.225.241 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-306136 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)
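
The three update-context runs above only refresh the kubeconfig entry for the profile; a standalone invocation looks the same outside the harness (profile name taken from this run):

    # point the kubeconfig context at the profile's current apiserver IP/port
    out/minikube-linux-arm64 -p functional-306136 update-context --alsologtostderr -v=2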

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "357.981241ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "55.867376ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "364.003233ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "53.981196ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)
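
The JSON output exercised here is also convenient for scripting; a rough sketch, assuming jq is installed and that the output keeps its usual valid/invalid arrays (jq and the field names are not part of the test itself):

    # full listing, then the --light variant that skips the slower cluster status probes
    out/minikube-linux-arm64 profile list -o json | jq -r '.valid[].Name'
    out/minikube-linux-arm64 profile list -o json --light | jq -r '.valid[].Name'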

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-306136 /tmp/TestFunctionalparallelMountCmdany-port1351639383/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1760808741217767073" to /tmp/TestFunctionalparallelMountCmdany-port1351639383/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1760808741217767073" to /tmp/TestFunctionalparallelMountCmdany-port1351639383/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1760808741217767073" to /tmp/TestFunctionalparallelMountCmdany-port1351639383/001/test-1760808741217767073
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-306136 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (345.601955ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1018 17:32:21.564444    4320 retry.go:31] will retry after 747.161616ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 18 17:32 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 18 17:32 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 18 17:32 test-1760808741217767073
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 ssh cat /mount-9p/test-1760808741217767073
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-306136 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [6c1d1297-eb3e-4039-a82b-baddbdeca21b] Pending
helpers_test.go:352: "busybox-mount" [6c1d1297-eb3e-4039-a82b-baddbdeca21b] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [6c1d1297-eb3e-4039-a82b-baddbdeca21b] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [6c1d1297-eb3e-4039-a82b-baddbdeca21b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.004356363s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-306136 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-306136 /tmp/TestFunctionalparallelMountCmdany-port1351639383/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.13s)
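
To reproduce the 9p mount outside the test, the same commands apply; a minimal sketch (the host path is illustrative, and the retry seen in the log exists because the mount comes up asynchronously):

    # expose a host directory inside the guest at /mount-9p; keep this running in the background
    out/minikube-linux-arm64 mount -p functional-306136 /tmp/mount-demo:/mount-9p --alsologtostderr -v=1 &
    # verify from inside the guest, as the test does
    out/minikube-linux-arm64 -p functional-306136 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-arm64 -p functional-306136 ssh -- ls -la /mount-9p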

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-306136 /tmp/TestFunctionalparallelMountCmdspecific-port4270371939/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-306136 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (409.07599ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1018 17:32:28.758083    4320 retry.go:31] will retry after 708.280863ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-306136 /tmp/TestFunctionalparallelMountCmdspecific-port4270371939/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-306136 ssh "sudo umount -f /mount-9p": exit status 1 (302.853602ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-306136 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-306136 /tmp/TestFunctionalparallelMountCmdspecific-port4270371939/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.21s)
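
The only difference from the any-port case is pinning the 9p server to a fixed host port; a sketch with the port used above (host path illustrative):

    out/minikube-linux-arm64 mount -p functional-306136 /tmp/mount-demo:/mount-9p --port 46464 &
    out/minikube-linux-arm64 -p functional-306136 ssh "findmnt -T /mount-9p | grep 9p"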

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-306136 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3137723272/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-306136 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3137723272/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-306136 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3137723272/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-306136 ssh "findmnt -T" /mount1: exit status 1 (554.428465ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1018 17:32:31.120982    4320 retry.go:31] will retry after 254.061153ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-306136 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-306136 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3137723272/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-306136 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3137723272/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-306136 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3137723272/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.73s)
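
Cleaning up several concurrent mounts is a single call, which is what this test verifies; sketch:

    # terminate every mount process belonging to the profile in one go
    out/minikube-linux-arm64 mount -p functional-306136 --kill=true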

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.60s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-306136 service list -o json
functional_test.go:1504: Took "637.096096ms" to run "out/minikube-linux-arm64 -p functional-306136 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.64s)
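
The JSON form of the service listing is the scripting-friendly counterpart of the plain list above; sketch (jq is used only for pretty-printing and is not part of the test):

    out/minikube-linux-arm64 -p functional-306136 service list -o json | jq .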

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-306136
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-306136
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-306136
--- PASS: TestFunctional/delete_minikube_cached_images (0.03s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (174.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1018 17:35:00.530140    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-181800 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (2m53.327156559s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 status --alsologtostderr -v 5
ha_test.go:107: (dbg) Done: out/minikube-linux-arm64 -p ha-181800 status --alsologtostderr -v 5: (1.028806404s)
--- PASS: TestMultiControlPlane/serial/StartCluster (174.36s)
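
The flags above are all that is needed to bring up a multi-control-plane cluster by hand; a sketch with a hypothetical profile name, same driver and runtime as this job:

    # --ha starts additional control-plane nodes; --wait true blocks until components are healthy
    out/minikube-linux-arm64 -p ha-demo start --ha --memory 3072 --wait true --driver=docker --container-runtime=crio
    out/minikube-linux-arm64 -p ha-demo status --alsologtostderr -v 5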

                                                
                                    
TestMultiControlPlane/serial/DeployApp (7.26s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-181800 kubectl -- rollout status deployment/busybox: (4.34260608s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 kubectl -- exec busybox-7b57f96db7-cp9q6 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 kubectl -- exec busybox-7b57f96db7-fbwpv -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 kubectl -- exec busybox-7b57f96db7-lzcbm -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 kubectl -- exec busybox-7b57f96db7-cp9q6 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 kubectl -- exec busybox-7b57f96db7-fbwpv -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 kubectl -- exec busybox-7b57f96db7-lzcbm -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 kubectl -- exec busybox-7b57f96db7-cp9q6 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 kubectl -- exec busybox-7b57f96db7-fbwpv -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 kubectl -- exec busybox-7b57f96db7-lzcbm -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.26s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.45s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 kubectl -- exec busybox-7b57f96db7-cp9q6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 kubectl -- exec busybox-7b57f96db7-cp9q6 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 kubectl -- exec busybox-7b57f96db7-fbwpv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 kubectl -- exec busybox-7b57f96db7-fbwpv -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 kubectl -- exec busybox-7b57f96db7-lzcbm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 kubectl -- exec busybox-7b57f96db7-lzcbm -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.45s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (59.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 node add --alsologtostderr -v 5
E1018 17:36:23.608206    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-181800 node add --alsologtostderr -v 5: (58.467439204s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-181800 status --alsologtostderr -v 5: (1.062154729s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (59.53s)
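
Worker nodes are added to the running cluster the same way; sketch with the hypothetical ha-demo profile:

    # node add defaults to a worker node; status should then list it alongside the control planes
    out/minikube-linux-arm64 -p ha-demo node add --alsologtostderr -v 5
    out/minikube-linux-arm64 -p ha-demo status --alsologtostderr -v 5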

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-181800 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.10s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.04s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.043798573s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.04s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (20.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-181800 status --output json --alsologtostderr -v 5: (1.026278791s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 cp testdata/cp-test.txt ha-181800:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 ssh -n ha-181800 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 cp ha-181800:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1463328482/001/cp-test_ha-181800.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 ssh -n ha-181800 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 cp ha-181800:/home/docker/cp-test.txt ha-181800-m02:/home/docker/cp-test_ha-181800_ha-181800-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 ssh -n ha-181800 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 ssh -n ha-181800-m02 "sudo cat /home/docker/cp-test_ha-181800_ha-181800-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 cp ha-181800:/home/docker/cp-test.txt ha-181800-m03:/home/docker/cp-test_ha-181800_ha-181800-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 ssh -n ha-181800 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 ssh -n ha-181800-m03 "sudo cat /home/docker/cp-test_ha-181800_ha-181800-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 cp ha-181800:/home/docker/cp-test.txt ha-181800-m04:/home/docker/cp-test_ha-181800_ha-181800-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 ssh -n ha-181800 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 ssh -n ha-181800-m04 "sudo cat /home/docker/cp-test_ha-181800_ha-181800-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 cp testdata/cp-test.txt ha-181800-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 ssh -n ha-181800-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 cp ha-181800-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1463328482/001/cp-test_ha-181800-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 ssh -n ha-181800-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 cp ha-181800-m02:/home/docker/cp-test.txt ha-181800:/home/docker/cp-test_ha-181800-m02_ha-181800.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 ssh -n ha-181800-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 ssh -n ha-181800 "sudo cat /home/docker/cp-test_ha-181800-m02_ha-181800.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 cp ha-181800-m02:/home/docker/cp-test.txt ha-181800-m03:/home/docker/cp-test_ha-181800-m02_ha-181800-m03.txt
E1018 17:37:03.654965    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/functional-306136/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 17:37:03.661311    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/functional-306136/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 17:37:03.672722    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/functional-306136/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 17:37:03.694059    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/functional-306136/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 17:37:03.735414    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/functional-306136/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 ssh -n ha-181800-m02 "sudo cat /home/docker/cp-test.txt"
E1018 17:37:03.816828    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/functional-306136/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 17:37:03.978348    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/functional-306136/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 ssh -n ha-181800-m03 "sudo cat /home/docker/cp-test_ha-181800-m02_ha-181800-m03.txt"
E1018 17:37:04.300602    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/functional-306136/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 cp ha-181800-m02:/home/docker/cp-test.txt ha-181800-m04:/home/docker/cp-test_ha-181800-m02_ha-181800-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 ssh -n ha-181800-m02 "sudo cat /home/docker/cp-test.txt"
E1018 17:37:04.942840    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/functional-306136/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 ssh -n ha-181800-m04 "sudo cat /home/docker/cp-test_ha-181800-m02_ha-181800-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 cp testdata/cp-test.txt ha-181800-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 ssh -n ha-181800-m03 "sudo cat /home/docker/cp-test.txt"
E1018 17:37:06.225073    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/functional-306136/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 cp ha-181800-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1463328482/001/cp-test_ha-181800-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 ssh -n ha-181800-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 cp ha-181800-m03:/home/docker/cp-test.txt ha-181800:/home/docker/cp-test_ha-181800-m03_ha-181800.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 ssh -n ha-181800-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 ssh -n ha-181800 "sudo cat /home/docker/cp-test_ha-181800-m03_ha-181800.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 cp ha-181800-m03:/home/docker/cp-test.txt ha-181800-m02:/home/docker/cp-test_ha-181800-m03_ha-181800-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 ssh -n ha-181800-m03 "sudo cat /home/docker/cp-test.txt"
E1018 17:37:08.786381    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/functional-306136/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 ssh -n ha-181800-m02 "sudo cat /home/docker/cp-test_ha-181800-m03_ha-181800-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 cp ha-181800-m03:/home/docker/cp-test.txt ha-181800-m04:/home/docker/cp-test_ha-181800-m03_ha-181800-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 ssh -n ha-181800-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 ssh -n ha-181800-m04 "sudo cat /home/docker/cp-test_ha-181800-m03_ha-181800-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 cp testdata/cp-test.txt ha-181800-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 ssh -n ha-181800-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 cp ha-181800-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1463328482/001/cp-test_ha-181800-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 ssh -n ha-181800-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 cp ha-181800-m04:/home/docker/cp-test.txt ha-181800:/home/docker/cp-test_ha-181800-m04_ha-181800.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 ssh -n ha-181800-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 ssh -n ha-181800 "sudo cat /home/docker/cp-test_ha-181800-m04_ha-181800.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 cp ha-181800-m04:/home/docker/cp-test.txt ha-181800-m02:/home/docker/cp-test_ha-181800-m04_ha-181800-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 ssh -n ha-181800-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 ssh -n ha-181800-m02 "sudo cat /home/docker/cp-test_ha-181800-m04_ha-181800-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 cp ha-181800-m04:/home/docker/cp-test.txt ha-181800-m03:/home/docker/cp-test_ha-181800-m04_ha-181800-m03.txt
E1018 17:37:13.907914    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/functional-306136/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 ssh -n ha-181800-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 ssh -n ha-181800-m03 "sudo cat /home/docker/cp-test_ha-181800-m04_ha-181800-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.10s)
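
The copy matrix above boils down to three forms of minikube cp plus an ssh read-back; a short sketch (ha-demo and its -m02 node are hypothetical names):

    # host -> node, node -> host, and node -> node copies
    out/minikube-linux-arm64 -p ha-demo cp testdata/cp-test.txt ha-demo:/home/docker/cp-test.txt
    out/minikube-linux-arm64 -p ha-demo cp ha-demo:/home/docker/cp-test.txt /tmp/cp-test-copy.txt
    out/minikube-linux-arm64 -p ha-demo cp ha-demo:/home/docker/cp-test.txt ha-demo-m02:/home/docker/cp-test.txt
    out/minikube-linux-arm64 -p ha-demo ssh -n ha-demo-m02 "sudo cat /home/docker/cp-test.txt"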

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.9s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 node stop m02 --alsologtostderr -v 5
E1018 17:37:24.149619    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/functional-306136/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-181800 node stop m02 --alsologtostderr -v 5: (12.102487923s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-181800 status --alsologtostderr -v 5: exit status 7 (799.954313ms)

                                                
                                                
-- stdout --
	ha-181800
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-181800-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-181800-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-181800-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 17:37:27.109845   48007 out.go:360] Setting OutFile to fd 1 ...
	I1018 17:37:27.110010   48007 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 17:37:27.110021   48007 out.go:374] Setting ErrFile to fd 2...
	I1018 17:37:27.110027   48007 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 17:37:27.110285   48007 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-2509/.minikube/bin
	I1018 17:37:27.110467   48007 out.go:368] Setting JSON to false
	I1018 17:37:27.110501   48007 mustload.go:65] Loading cluster: ha-181800
	I1018 17:37:27.110597   48007 notify.go:220] Checking for updates...
	I1018 17:37:27.111003   48007 config.go:182] Loaded profile config "ha-181800": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 17:37:27.111025   48007 status.go:174] checking status of ha-181800 ...
	I1018 17:37:27.111870   48007 cli_runner.go:164] Run: docker container inspect ha-181800 --format={{.State.Status}}
	I1018 17:37:27.131976   48007 status.go:371] ha-181800 host status = "Running" (err=<nil>)
	I1018 17:37:27.131998   48007 host.go:66] Checking if "ha-181800" exists ...
	I1018 17:37:27.132273   48007 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-181800
	I1018 17:37:27.174898   48007 host.go:66] Checking if "ha-181800" exists ...
	I1018 17:37:27.175186   48007 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 17:37:27.175228   48007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800
	I1018 17:37:27.194603   48007 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800/id_rsa Username:docker}
	I1018 17:37:27.298684   48007 ssh_runner.go:195] Run: systemctl --version
	I1018 17:37:27.305560   48007 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 17:37:27.321805   48007 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 17:37:27.412065   48007 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-10-18 17:37:27.402171558 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 17:37:27.412596   48007 kubeconfig.go:125] found "ha-181800" server: "https://192.168.49.254:8443"
	I1018 17:37:27.412633   48007 api_server.go:166] Checking apiserver status ...
	I1018 17:37:27.412678   48007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:37:27.425072   48007 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1240/cgroup
	I1018 17:37:27.433896   48007 api_server.go:182] apiserver freezer: "9:freezer:/docker/5743bf3218eb5aef405eed98d37c004771c8345cbd69290b8943dc0c7cbde7c2/crio/crio-9e6561cd9cb083f1a2ecfdb88f2ba361a17d076033cf4709e127991fe1e24a7d"
	I1018 17:37:27.433969   48007 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/5743bf3218eb5aef405eed98d37c004771c8345cbd69290b8943dc0c7cbde7c2/crio/crio-9e6561cd9cb083f1a2ecfdb88f2ba361a17d076033cf4709e127991fe1e24a7d/freezer.state
	I1018 17:37:27.441895   48007 api_server.go:204] freezer state: "THAWED"
	I1018 17:37:27.441924   48007 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1018 17:37:27.450618   48007 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1018 17:37:27.450645   48007 status.go:463] ha-181800 apiserver status = Running (err=<nil>)
	I1018 17:37:27.450657   48007 status.go:176] ha-181800 status: &{Name:ha-181800 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 17:37:27.450703   48007 status.go:174] checking status of ha-181800-m02 ...
	I1018 17:37:27.451035   48007 cli_runner.go:164] Run: docker container inspect ha-181800-m02 --format={{.State.Status}}
	I1018 17:37:27.472038   48007 status.go:371] ha-181800-m02 host status = "Stopped" (err=<nil>)
	I1018 17:37:27.472065   48007 status.go:384] host is not running, skipping remaining checks
	I1018 17:37:27.472071   48007 status.go:176] ha-181800-m02 status: &{Name:ha-181800-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 17:37:27.472091   48007 status.go:174] checking status of ha-181800-m03 ...
	I1018 17:37:27.472390   48007 cli_runner.go:164] Run: docker container inspect ha-181800-m03 --format={{.State.Status}}
	I1018 17:37:27.491067   48007 status.go:371] ha-181800-m03 host status = "Running" (err=<nil>)
	I1018 17:37:27.491096   48007 host.go:66] Checking if "ha-181800-m03" exists ...
	I1018 17:37:27.491471   48007 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-181800-m03
	I1018 17:37:27.513325   48007 host.go:66] Checking if "ha-181800-m03" exists ...
	I1018 17:37:27.513721   48007 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 17:37:27.513774   48007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m03
	I1018 17:37:27.531541   48007 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m03/id_rsa Username:docker}
	I1018 17:37:27.634691   48007 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 17:37:27.648340   48007 kubeconfig.go:125] found "ha-181800" server: "https://192.168.49.254:8443"
	I1018 17:37:27.648374   48007 api_server.go:166] Checking apiserver status ...
	I1018 17:37:27.648434   48007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 17:37:27.660976   48007 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1174/cgroup
	I1018 17:37:27.671778   48007 api_server.go:182] apiserver freezer: "9:freezer:/docker/6e29b1a2ea23e8609bcfc8711bff9a6dbb15b68717dfe0ab3be5b31ef80495ff/crio/crio-3890c3f9aa1271612019326382a29e02141d12a00268027c0bd9a0bec0522795"
	I1018 17:37:27.671858   48007 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/6e29b1a2ea23e8609bcfc8711bff9a6dbb15b68717dfe0ab3be5b31ef80495ff/crio/crio-3890c3f9aa1271612019326382a29e02141d12a00268027c0bd9a0bec0522795/freezer.state
	I1018 17:37:27.679298   48007 api_server.go:204] freezer state: "THAWED"
	I1018 17:37:27.679323   48007 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1018 17:37:27.687655   48007 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1018 17:37:27.687683   48007 status.go:463] ha-181800-m03 apiserver status = Running (err=<nil>)
	I1018 17:37:27.687692   48007 status.go:176] ha-181800-m03 status: &{Name:ha-181800-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 17:37:27.687708   48007 status.go:174] checking status of ha-181800-m04 ...
	I1018 17:37:27.688003   48007 cli_runner.go:164] Run: docker container inspect ha-181800-m04 --format={{.State.Status}}
	I1018 17:37:27.707270   48007 status.go:371] ha-181800-m04 host status = "Running" (err=<nil>)
	I1018 17:37:27.707293   48007 host.go:66] Checking if "ha-181800-m04" exists ...
	I1018 17:37:27.707580   48007 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-181800-m04
	I1018 17:37:27.726249   48007 host.go:66] Checking if "ha-181800-m04" exists ...
	I1018 17:37:27.726563   48007 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 17:37:27.726617   48007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-181800-m04
	I1018 17:37:27.744290   48007 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/ha-181800-m04/id_rsa Username:docker}
	I1018 17:37:27.846507   48007 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 17:37:27.861383   48007 status.go:176] ha-181800-m04 status: &{Name:ha-181800-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.90s)
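
The exit status 7 above is what the test expects once one host is Stopped: status reports non-zero when any node is down. Reproducing the degraded state by hand looks roughly like this (hypothetical profile name):

    # stop one secondary control-plane node, then observe the non-zero status exit code
    out/minikube-linux-arm64 -p ha-demo node stop m02 --alsologtostderr -v 5
    out/minikube-linux-arm64 -p ha-demo status --alsologtostderr -v 5 || echo "status exit code: $?"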

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.82s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.82s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (109.26s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 node start m02 --alsologtostderr -v 5
E1018 17:37:44.631034    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/functional-306136/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 17:38:25.593141    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/functional-306136/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-181800 node start m02 --alsologtostderr -v 5: (1m47.924141259s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-181800 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-181800 status --alsologtostderr -v 5: (1.207558955s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (109.26s)
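
Bringing the node back is the mirror image of the stop above; sketch with the same hypothetical profile:

    # restart the stopped control-plane node and confirm it rejoins the cluster
    out/minikube-linux-arm64 -p ha-demo node start m02 --alsologtostderr -v 5
    kubectl get nodes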

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.28s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.274810746s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.28s)

                                                
                                    
TestJSONOutput/start/Command (84.55s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-310292 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1018 17:53:26.717579    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/functional-306136/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-310292 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m24.536846277s)
--- PASS: TestJSONOutput/start/Command (84.55s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (5.84s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-310292 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-310292 --output=json --user=testUser: (5.841988332s)
--- PASS: TestJSONOutput/stop/Command (5.84s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.24s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-939925 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-939925 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (93.556916ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"1eb68622-ebea-4b4b-bc85-c8618ced9bb5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-939925] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b21675df-66d6-4829-88b2-f2583a7db8a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21409"}}
	{"specversion":"1.0","id":"afbcf5e5-35db-45c7-8aaf-7f204824bb52","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"af9b5813-c4d7-4495-b9a0-a618fb508b5b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21409-2509/kubeconfig"}}
	{"specversion":"1.0","id":"ebeda1b2-377d-43e1-a726-df74a37b1f21","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-2509/.minikube"}}
	{"specversion":"1.0","id":"844c7b48-0021-4f3a-b464-93ce6edf4728","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"6ffe0a6f-3edb-445a-bdac-d5afb1c363ec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"5c24c4a5-7510-4ea9-981a-482c1786f19f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-939925" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-939925
--- PASS: TestErrorJSONOutput (0.24s)
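Note: with --output=json, minikube emits one CloudEvents-style JSON object per line (specversion, id, source, type, data), as in the stdout above. A minimal sketch for filtering error events out of that stream; it assumes jq is installed and uses an illustrative profile name, not one from this run:

  # run a start that is expected to fail and keep only the error events' messages
  out/minikube-linux-arm64 start -p json-demo --output=json --driver=fail \
    | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'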

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (48.7s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-795642 --network=
E1018 17:55:00.530902    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-795642 --network=: (46.486917544s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-795642" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-795642
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-795642: (2.186404628s)
--- PASS: TestKicCustomNetwork/create_custom_network (48.70s)

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (37.03s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-461694 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-461694 --network=bridge: (34.85799237s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-461694" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-461694
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-461694: (2.146603175s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (37.03s)

                                                
                                    
x
+
TestKicExistingNetwork (37.42s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1018 17:56:10.112082    4320 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1018 17:56:10.129119    4320 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1018 17:56:10.129188    4320 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1018 17:56:10.129205    4320 cli_runner.go:164] Run: docker network inspect existing-network
W1018 17:56:10.145651    4320 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1018 17:56:10.145684    4320 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1018 17:56:10.145700    4320 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1018 17:56:10.145807    4320 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1018 17:56:10.161491    4320 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-903568cdf824 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:7a:80:c0:8c:ed} reservation:<nil>}
I1018 17:56:10.161784    4320 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400184d260}
I1018 17:56:10.161809    4320 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1018 17:56:10.161859    4320 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1018 17:56:10.225403    4320 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-433910 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-433910 --network=existing-network: (35.203875326s)
helpers_test.go:175: Cleaning up "existing-network-433910" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-433910
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-433910: (2.07000084s)
I1018 17:56:47.516170    4320 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (37.42s)
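Note: this test pre-creates a Docker network and then points minikube at it with --network, so the cluster attaches to the existing bridge instead of creating its own. A simplified reproduction of the sequence logged above (network and profile names are illustrative, and the extra -o bridge options from the log are omitted):

  # create the bridge network up front, labelled the way minikube labels its own networks
  docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 \
    --label=created_by.minikube.sigs.k8s.io=true existing-network
  # attach a new cluster to the pre-existing network
  out/minikube-linux-arm64 start -p existing-network-demo --network=existing-network
  # confirm the network was reused rather than recreated
  docker network ls --format '{{.Name}}'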

                                                
                                    
x
+
TestKicCustomSubnet (36.31s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-945462 --subnet=192.168.60.0/24
E1018 17:57:03.654799    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/functional-306136/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-945462 --subnet=192.168.60.0/24: (34.053196245s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-945462 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-945462" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-945462
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-945462: (2.238286486s)
--- PASS: TestKicCustomSubnet (36.31s)

                                                
                                    
x
+
TestKicStaticIP (36.5s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-723443 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-723443 --static-ip=192.168.200.200: (33.883366079s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-723443 ip
helpers_test.go:175: Cleaning up "static-ip-723443" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-723443
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-723443: (2.468546531s)
--- PASS: TestKicStaticIP (36.50s)
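Note: the two KIC tests above exercise --subnet and --static-ip on the docker driver. A short sketch of both flags with the same verification steps the tests use (profile names are illustrative):

  # pin the cluster network to a specific subnet and verify it on the Docker side
  out/minikube-linux-arm64 start -p subnet-demo --subnet=192.168.60.0/24
  docker network inspect subnet-demo --format '{{(index .IPAM.Config 0).Subnet}}'

  # give the node a fixed IP and read it back
  out/minikube-linux-arm64 start -p static-ip-demo --static-ip=192.168.200.200
  out/minikube-linux-arm64 -p static-ip-demo ip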

                                                
                                    
x
+
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
x
+
TestMinikubeProfile (67.34s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-256448 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-256448 --driver=docker  --container-runtime=crio: (27.971028015s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-259155 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-259155 --driver=docker  --container-runtime=crio: (33.811704553s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-256448
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-259155
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-259155" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-259155
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-259155: (2.074126721s)
helpers_test.go:175: Cleaning up "first-256448" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-256448
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-256448: (1.999119621s)
--- PASS: TestMinikubeProfile (67.34s)
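Note: TestMinikubeProfile switches the active profile and inspects profile list -ojson after each switch. A sketch for reading profile names out of that JSON; it assumes jq is available, and the .valid[].Name field path reflects the current output shape and may differ between releases:

  # list healthy profiles by name
  out/minikube-linux-arm64 profile list -ojson | jq -r '.valid[].Name'
  # make one of them the active profile for subsequent commands
  out/minikube-linux-arm64 profile first-demo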

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (11.05s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-016812 --memory=3072 --mount-string /tmp/TestMountStartserial3239438307/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-016812 --memory=3072 --mount-string /tmp/TestMountStartserial3239438307/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (10.054395066s)
--- PASS: TestMountStart/serial/StartWithMountFirst (11.05s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-016812 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)
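Note: the MountStart tests start a no-Kubernetes node with a 9p host mount and then verify it over ssh. A condensed version of the flags exercised above (host path and profile name are illustrative):

  # mount a host directory into the node at /minikube-host, without starting Kubernetes
  out/minikube-linux-arm64 start -p mount-demo --memory=3072 --no-kubernetes \
    --mount-string /tmp/shared:/minikube-host --mount-port 46464 \
    --mount-uid 0 --mount-gid 0 --mount-msize 6543 \
    --driver=docker --container-runtime=crio
  # confirm the mount is visible inside the node
  out/minikube-linux-arm64 -p mount-demo ssh -- ls /minikube-host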

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (10.22s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-019126 --memory=3072 --mount-string /tmp/TestMountStartserial3239438307/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-019126 --memory=3072 --mount-string /tmp/TestMountStartserial3239438307/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (9.216610274s)
--- PASS: TestMountStart/serial/StartWithMountSecond (10.22s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-019126 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (1.72s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-016812 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-016812 --alsologtostderr -v=5: (1.718997778s)
--- PASS: TestMountStart/serial/DeleteFirst (1.72s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-019126 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-019126
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-019126: (1.289586552s)
--- PASS: TestMountStart/serial/Stop (1.29s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (7.95s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-019126
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-019126: (6.94862026s)
--- PASS: TestMountStart/serial/RestartStopped (7.95s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-019126 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (138.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-876639 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1018 18:00:00.545990    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-876639 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m17.791314113s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876639 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (138.33s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (7.3s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-876639 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-876639 -- rollout status deployment/busybox
E1018 18:02:03.655490    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/functional-306136/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-876639 -- rollout status deployment/busybox: (5.336909124s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-876639 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-876639 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-876639 -- exec busybox-7b57f96db7-2wggl -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-876639 -- exec busybox-7b57f96db7-f5wk2 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-876639 -- exec busybox-7b57f96db7-2wggl -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-876639 -- exec busybox-7b57f96db7-f5wk2 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-876639 -- exec busybox-7b57f96db7-2wggl -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-876639 -- exec busybox-7b57f96db7-f5wk2 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (7.30s)
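Note: DeployApp2Nodes applies a two-replica busybox deployment, waits for the rollout, then runs DNS lookups from every pod. The same pattern in brief; the manifest path comes from this repo's testdata, the context name is illustrative, and the loop assumes only the busybox pods exist in the default namespace:

  # deploy, wait for the rollout, then check in-cluster DNS from each pod
  kubectl --context multi-demo apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
  kubectl --context multi-demo rollout status deployment/busybox
  for pod in $(kubectl --context multi-demo get pods -o jsonpath='{.items[*].metadata.name}'); do
    kubectl --context multi-demo exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
  done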

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.87s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-876639 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-876639 -- exec busybox-7b57f96db7-2wggl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-876639 -- exec busybox-7b57f96db7-2wggl -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-876639 -- exec busybox-7b57f96db7-f5wk2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-876639 -- exec busybox-7b57f96db7-f5wk2 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.87s)
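Note: this test resolves host.minikube.internal from inside each pod and pings the address it returns (the network gateway, 192.168.67.1 in this run). The same check by hand, with an illustrative context and pod name, and the ping target taken from the nslookup output:

  # resolve the host's address as seen from a pod, then ping it
  kubectl --context multi-demo exec busybox-pod -- nslookup host.minikube.internal
  kubectl --context multi-demo exec busybox-pod -- ping -c 1 192.168.67.1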

                                                
                                    
x
+
TestMultiNode/serial/AddNode (59.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-876639 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-876639 -v=5 --alsologtostderr: (59.2711258s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876639 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (59.97s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-876639 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.76s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (10.42s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876639 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876639 cp testdata/cp-test.txt multinode-876639:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876639 ssh -n multinode-876639 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876639 cp multinode-876639:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2458187458/001/cp-test_multinode-876639.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876639 ssh -n multinode-876639 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876639 cp multinode-876639:/home/docker/cp-test.txt multinode-876639-m02:/home/docker/cp-test_multinode-876639_multinode-876639-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876639 ssh -n multinode-876639 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876639 ssh -n multinode-876639-m02 "sudo cat /home/docker/cp-test_multinode-876639_multinode-876639-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876639 cp multinode-876639:/home/docker/cp-test.txt multinode-876639-m03:/home/docker/cp-test_multinode-876639_multinode-876639-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876639 ssh -n multinode-876639 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876639 ssh -n multinode-876639-m03 "sudo cat /home/docker/cp-test_multinode-876639_multinode-876639-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876639 cp testdata/cp-test.txt multinode-876639-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876639 ssh -n multinode-876639-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876639 cp multinode-876639-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2458187458/001/cp-test_multinode-876639-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876639 ssh -n multinode-876639-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876639 cp multinode-876639-m02:/home/docker/cp-test.txt multinode-876639:/home/docker/cp-test_multinode-876639-m02_multinode-876639.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876639 ssh -n multinode-876639-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876639 ssh -n multinode-876639 "sudo cat /home/docker/cp-test_multinode-876639-m02_multinode-876639.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876639 cp multinode-876639-m02:/home/docker/cp-test.txt multinode-876639-m03:/home/docker/cp-test_multinode-876639-m02_multinode-876639-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876639 ssh -n multinode-876639-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876639 ssh -n multinode-876639-m03 "sudo cat /home/docker/cp-test_multinode-876639-m02_multinode-876639-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876639 cp testdata/cp-test.txt multinode-876639-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876639 ssh -n multinode-876639-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876639 cp multinode-876639-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2458187458/001/cp-test_multinode-876639-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876639 ssh -n multinode-876639-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876639 cp multinode-876639-m03:/home/docker/cp-test.txt multinode-876639:/home/docker/cp-test_multinode-876639-m03_multinode-876639.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876639 ssh -n multinode-876639-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876639 ssh -n multinode-876639 "sudo cat /home/docker/cp-test_multinode-876639-m03_multinode-876639.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876639 cp multinode-876639-m03:/home/docker/cp-test.txt multinode-876639-m02:/home/docker/cp-test_multinode-876639-m03_multinode-876639-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876639 ssh -n multinode-876639-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876639 ssh -n multinode-876639-m02 "sudo cat /home/docker/cp-test_multinode-876639-m03_multinode-876639-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.42s)
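Note: CopyFile round-trips a file with minikube cp between the host and each node, and between nodes directly, verifying every copy with ssh + cat. A minimal sketch (profile and node names are illustrative):

  # host -> node, then read it back over ssh on that node
  out/minikube-linux-arm64 -p multi-demo cp testdata/cp-test.txt multi-demo-m02:/home/docker/cp-test.txt
  out/minikube-linux-arm64 -p multi-demo ssh -n multi-demo-m02 "sudo cat /home/docker/cp-test.txt"
  # node -> node without staging the file on the host
  out/minikube-linux-arm64 -p multi-demo cp multi-demo-m02:/home/docker/cp-test.txt multi-demo-m03:/home/docker/copy.txt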

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.4s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876639 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-876639 node stop m03: (1.314960165s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876639 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-876639 status: exit status 7 (542.531437ms)

                                                
                                                
-- stdout --
	multinode-876639
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-876639-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-876639-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876639 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-876639 status --alsologtostderr: exit status 7 (545.351612ms)

                                                
                                                
-- stdout --
	multinode-876639
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-876639-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-876639-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 18:03:22.589727  123977 out.go:360] Setting OutFile to fd 1 ...
	I1018 18:03:22.589912  123977 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 18:03:22.589937  123977 out.go:374] Setting ErrFile to fd 2...
	I1018 18:03:22.589955  123977 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 18:03:22.590378  123977 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-2509/.minikube/bin
	I1018 18:03:22.590659  123977 out.go:368] Setting JSON to false
	I1018 18:03:22.590744  123977 mustload.go:65] Loading cluster: multinode-876639
	I1018 18:03:22.591513  123977 config.go:182] Loaded profile config "multinode-876639": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 18:03:22.591801  123977 status.go:174] checking status of multinode-876639 ...
	I1018 18:03:22.592563  123977 cli_runner.go:164] Run: docker container inspect multinode-876639 --format={{.State.Status}}
	I1018 18:03:22.591772  123977 notify.go:220] Checking for updates...
	I1018 18:03:22.612161  123977 status.go:371] multinode-876639 host status = "Running" (err=<nil>)
	I1018 18:03:22.612186  123977 host.go:66] Checking if "multinode-876639" exists ...
	I1018 18:03:22.612492  123977 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-876639
	I1018 18:03:22.641108  123977 host.go:66] Checking if "multinode-876639" exists ...
	I1018 18:03:22.641431  123977 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 18:03:22.641480  123977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-876639
	I1018 18:03:22.664178  123977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/multinode-876639/id_rsa Username:docker}
	I1018 18:03:22.766565  123977 ssh_runner.go:195] Run: systemctl --version
	I1018 18:03:22.773099  123977 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 18:03:22.786657  123977 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 18:03:22.843364  123977 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-18 18:03:22.834204476 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 18:03:22.843908  123977 kubeconfig.go:125] found "multinode-876639" server: "https://192.168.67.2:8443"
	I1018 18:03:22.843948  123977 api_server.go:166] Checking apiserver status ...
	I1018 18:03:22.843998  123977 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 18:03:22.855923  123977 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1262/cgroup
	I1018 18:03:22.865441  123977 api_server.go:182] apiserver freezer: "9:freezer:/docker/58295bbde4f7f0af5101307497139d417b7722cce4683fd6fb300eabaf68ee9f/crio/crio-584e5388547a3736bb2393b14aa2593a634ca1b93d18021dc386b59511f1b714"
	I1018 18:03:22.865522  123977 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/58295bbde4f7f0af5101307497139d417b7722cce4683fd6fb300eabaf68ee9f/crio/crio-584e5388547a3736bb2393b14aa2593a634ca1b93d18021dc386b59511f1b714/freezer.state
	I1018 18:03:22.873136  123977 api_server.go:204] freezer state: "THAWED"
	I1018 18:03:22.873168  123977 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1018 18:03:22.881462  123977 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1018 18:03:22.881490  123977 status.go:463] multinode-876639 apiserver status = Running (err=<nil>)
	I1018 18:03:22.881502  123977 status.go:176] multinode-876639 status: &{Name:multinode-876639 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 18:03:22.881520  123977 status.go:174] checking status of multinode-876639-m02 ...
	I1018 18:03:22.881825  123977 cli_runner.go:164] Run: docker container inspect multinode-876639-m02 --format={{.State.Status}}
	I1018 18:03:22.899711  123977 status.go:371] multinode-876639-m02 host status = "Running" (err=<nil>)
	I1018 18:03:22.899733  123977 host.go:66] Checking if "multinode-876639-m02" exists ...
	I1018 18:03:22.900032  123977 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-876639-m02
	I1018 18:03:22.918078  123977 host.go:66] Checking if "multinode-876639-m02" exists ...
	I1018 18:03:22.918407  123977 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 18:03:22.918453  123977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-876639-m02
	I1018 18:03:22.936229  123977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21409-2509/.minikube/machines/multinode-876639-m02/id_rsa Username:docker}
	I1018 18:03:23.038440  123977 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 18:03:23.051712  123977 status.go:176] multinode-876639-m02 status: &{Name:multinode-876639-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1018 18:03:23.051746  123977 status.go:174] checking status of multinode-876639-m03 ...
	I1018 18:03:23.052056  123977 cli_runner.go:164] Run: docker container inspect multinode-876639-m03 --format={{.State.Status}}
	I1018 18:03:23.071164  123977 status.go:371] multinode-876639-m03 host status = "Stopped" (err=<nil>)
	I1018 18:03:23.071188  123977 status.go:384] host is not running, skipping remaining checks
	I1018 18:03:23.071195  123977 status.go:176] multinode-876639-m03 status: &{Name:multinode-876639-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.40s)

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (8.17s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876639 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-876639 node start m03 -v=5 --alsologtostderr: (7.411963577s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876639 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.17s)
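Note: StopNode and StartAfterStop together cover the per-node lifecycle; minikube status exits with code 7 while any node is stopped, which is what the Non-zero exit in StopNode reflects. A small sketch of the same flow (profile name is illustrative):

  # stop one worker, observe the degraded status (exit code 7), then bring it back
  out/minikube-linux-arm64 -p multi-demo node stop m03
  out/minikube-linux-arm64 -p multi-demo status || echo "status exit code: $?"
  out/minikube-linux-arm64 -p multi-demo node start m03
  kubectl get nodes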

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (79.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-876639
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-876639
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-876639: (25.022962454s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-876639 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-876639 --wait=true -v=5 --alsologtostderr: (54.053742468s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-876639
--- PASS: TestMultiNode/serial/RestartKeepsNodes (79.22s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (5.67s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876639 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-876639 node delete m03: (5.006312284s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876639 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.67s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (23.99s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876639 stop
E1018 18:05:00.529879    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-876639 stop: (23.819560267s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876639 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-876639 status: exit status 7 (88.665163ms)

                                                
                                                
-- stdout --
	multinode-876639
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-876639-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876639 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-876639 status --alsologtostderr: exit status 7 (83.665771ms)

                                                
                                                
-- stdout --
	multinode-876639
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-876639-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 18:05:20.100446  131771 out.go:360] Setting OutFile to fd 1 ...
	I1018 18:05:20.100552  131771 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 18:05:20.100561  131771 out.go:374] Setting ErrFile to fd 2...
	I1018 18:05:20.100567  131771 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 18:05:20.100826  131771 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-2509/.minikube/bin
	I1018 18:05:20.101054  131771 out.go:368] Setting JSON to false
	I1018 18:05:20.101104  131771 mustload.go:65] Loading cluster: multinode-876639
	I1018 18:05:20.101199  131771 notify.go:220] Checking for updates...
	I1018 18:05:20.101495  131771 config.go:182] Loaded profile config "multinode-876639": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 18:05:20.101507  131771 status.go:174] checking status of multinode-876639 ...
	I1018 18:05:20.102322  131771 cli_runner.go:164] Run: docker container inspect multinode-876639 --format={{.State.Status}}
	I1018 18:05:20.121253  131771 status.go:371] multinode-876639 host status = "Stopped" (err=<nil>)
	I1018 18:05:20.121273  131771 status.go:384] host is not running, skipping remaining checks
	I1018 18:05:20.121279  131771 status.go:176] multinode-876639 status: &{Name:multinode-876639 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 18:05:20.121308  131771 status.go:174] checking status of multinode-876639-m02 ...
	I1018 18:05:20.121609  131771 cli_runner.go:164] Run: docker container inspect multinode-876639-m02 --format={{.State.Status}}
	I1018 18:05:20.139093  131771 status.go:371] multinode-876639-m02 host status = "Stopped" (err=<nil>)
	I1018 18:05:20.139114  131771 status.go:384] host is not running, skipping remaining checks
	I1018 18:05:20.139119  131771 status.go:176] multinode-876639-m02 status: &{Name:multinode-876639-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.99s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (53.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-876639 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-876639 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (52.361130245s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-876639 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (53.08s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (36.79s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-876639
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-876639-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-876639-m02 --driver=docker  --container-runtime=crio: exit status 14 (88.754564ms)

                                                
                                                
-- stdout --
	* [multinode-876639-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-2509/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-2509/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-876639-m02' is duplicated with machine name 'multinode-876639-m02' in profile 'multinode-876639'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-876639-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-876639-m03 --driver=docker  --container-runtime=crio: (34.232275625s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-876639
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-876639: exit status 80 (331.849784ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-876639 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-876639-m03 already exists in multinode-876639-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-876639-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-876639-m03: (2.08312914s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (36.79s)

                                                
                                    
x
+
TestPreload (127.12s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-546135 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
E1018 18:07:03.655347    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/functional-306136/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-546135 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (1m1.422499296s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-546135 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-546135 image pull gcr.io/k8s-minikube/busybox: (2.262392748s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-546135
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-546135: (5.911597419s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-546135 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-546135 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (54.795179936s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-546135 image list
helpers_test.go:175: Cleaning up "test-preload-546135" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-546135
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-546135: (2.502134957s)
--- PASS: TestPreload (127.12s)

                                                
                                    
TestScheduledStopUnix (106.77s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-441375 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-441375 --memory=3072 --driver=docker  --container-runtime=crio: (30.441835967s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-441375 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-441375 -n scheduled-stop-441375
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-441375 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1018 18:09:32.410550    4320 retry.go:31] will retry after 141.535µs: open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/scheduled-stop-441375/pid: no such file or directory
I1018 18:09:32.411687    4320 retry.go:31] will retry after 173.512µs: open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/scheduled-stop-441375/pid: no such file or directory
I1018 18:09:32.412790    4320 retry.go:31] will retry after 307.472µs: open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/scheduled-stop-441375/pid: no such file or directory
I1018 18:09:32.413859    4320 retry.go:31] will retry after 425.47µs: open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/scheduled-stop-441375/pid: no such file or directory
I1018 18:09:32.414961    4320 retry.go:31] will retry after 286.635µs: open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/scheduled-stop-441375/pid: no such file or directory
I1018 18:09:32.416067    4320 retry.go:31] will retry after 1.060989ms: open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/scheduled-stop-441375/pid: no such file or directory
I1018 18:09:32.417160    4320 retry.go:31] will retry after 1.482111ms: open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/scheduled-stop-441375/pid: no such file or directory
I1018 18:09:32.419303    4320 retry.go:31] will retry after 1.085353ms: open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/scheduled-stop-441375/pid: no such file or directory
I1018 18:09:32.421510    4320 retry.go:31] will retry after 2.910419ms: open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/scheduled-stop-441375/pid: no such file or directory
I1018 18:09:32.424683    4320 retry.go:31] will retry after 3.376763ms: open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/scheduled-stop-441375/pid: no such file or directory
I1018 18:09:32.428899    4320 retry.go:31] will retry after 8.597596ms: open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/scheduled-stop-441375/pid: no such file or directory
I1018 18:09:32.438033    4320 retry.go:31] will retry after 8.119006ms: open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/scheduled-stop-441375/pid: no such file or directory
I1018 18:09:32.447227    4320 retry.go:31] will retry after 11.13649ms: open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/scheduled-stop-441375/pid: no such file or directory
I1018 18:09:32.459462    4320 retry.go:31] will retry after 27.257454ms: open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/scheduled-stop-441375/pid: no such file or directory
I1018 18:09:32.487699    4320 retry.go:31] will retry after 37.243999ms: open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/scheduled-stop-441375/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-441375 --cancel-scheduled
E1018 18:09:43.613542    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-441375 -n scheduled-stop-441375
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-441375
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-441375 --schedule 15s
E1018 18:10:00.530381    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 18:10:06.719497    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/functional-306136/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-441375
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-441375: exit status 7 (77.279518ms)

                                                
                                                
-- stdout --
	scheduled-stop-441375
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-441375 -n scheduled-stop-441375
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-441375 -n scheduled-stop-441375: exit status 7 (85.752589ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-441375" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-441375
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-441375: (4.70465208s)
--- PASS: TestScheduledStopUnix (106.77s)
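The burst of retry.go:31 lines in this test is the harness polling for the scheduled-stop pid file with a growing delay until the stop daemon has written it. A minimal sketch of that polling pattern is shown below; it assumes nothing about minikube's own retry package, and the path and timeout are examples only.

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForFile polls for path with a growing delay until it appears or the
// deadline passes. Illustrative only; minikube uses its own retry helper.
func waitForFile(path string, timeout time.Duration) ([]byte, error) {
	deadline := time.Now().Add(timeout)
	delay := 100 * time.Microsecond
	for {
		data, err := os.ReadFile(path)
		if err == nil {
			return data, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out waiting for %s: %w", path, err)
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2 // back off, roughly doubling like the log above
	}
}

func main() {
	if pid, err := waitForFile("/tmp/scheduled-stop-example/pid", 2*time.Second); err == nil {
		fmt.Printf("scheduled stop pid: %s\n", pid)
	}
}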

                                                
                                    
TestInsufficientStorage (14.57s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-583912 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-583912 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (11.732739455s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"8a2fbebc-c012-45e9-964f-55b81e15f7c4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-583912] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7628301b-494f-4c8e-8d0a-f259f1cdd393","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21409"}}
	{"specversion":"1.0","id":"249aadbb-aebb-4d38-ab7e-6eacf0c6c4ce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ffebec8c-fd57-420c-91f1-fcd60b1625a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21409-2509/kubeconfig"}}
	{"specversion":"1.0","id":"87b20b5c-bd45-43d4-a5c9-694ec498928b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-2509/.minikube"}}
	{"specversion":"1.0","id":"a07daa8d-c837-4bea-9875-cc8869cc9bd3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"bb63d858-fff5-43a9-9fb8-3e9347413f84","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"0f88ece5-ebe6-415c-9eb2-fa3688af18e8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"340a5398-cbe1-447b-9eda-5d5c2be58e3d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"2e32b767-2e5e-4b44-875e-fbb2d0c1ed80","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"7182b690-8496-461b-a775-67adace912c4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"dca7d20a-780c-4f5a-b52e-c6b69bd6dbf3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-583912\" primary control-plane node in \"insufficient-storage-583912\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"2f659f68-d808-448a-ab02-ee66c1914253","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1760609789-21757 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"bf468441-29ef-4c60-a5ce-4076a3236cf9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"9ea39b53-8192-4bb0-b8d9-5e888b8f2585","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-583912 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-583912 --output=json --layout=cluster: exit status 7 (563.711884ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-583912","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-583912","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1018 18:11:00.485443  147932 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-583912" does not appear in /home/jenkins/minikube-integration/21409-2509/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-583912 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-583912 --output=json --layout=cluster: exit status 7 (310.756104ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-583912","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-583912","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1018 18:11:00.798458  147995 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-583912" does not appear in /home/jenkins/minikube-integration/21409-2509/kubeconfig
	E1018 18:11:00.808468  147995 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/insufficient-storage-583912/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-583912" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-583912
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-583912: (1.957825145s)
--- PASS: TestInsufficientStorage (14.57s)
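With --output=json, each progress step is emitted as one CloudEvents-style JSON object per line, and the test only needs to spot the io.k8s.sigs.minikube.error event (here RSRC_DOCKER_STORAGE with exit code 26). A minimal sketch of scanning that stream follows, with a struct trimmed to just the fields visible in the output above; it is not minikube's own event type.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors only the fields of the CloudEvents-style lines shown above.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// e.g. piped from: minikube start ... --output=json
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // event lines can be long
	for sc.Scan() {
		var e event
		if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
			continue // ignore non-JSON noise
		}
		if e.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error event %s (exit code %s): %s\n",
				e.Data["name"], e.Data["exitcode"], e.Data["message"])
		}
	}
}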

                                                
                                    
TestRunningBinaryUpgrade (51.76s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.2351364980 start -p running-upgrade-837868 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.2351364980 start -p running-upgrade-837868 --memory=3072 --vm-driver=docker  --container-runtime=crio: (32.558058157s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-837868 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1018 18:15:00.530595    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-837868 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (16.13312914s)
helpers_test.go:175: Cleaning up "running-upgrade-837868" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-837868
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-837868: (2.072237105s)
--- PASS: TestRunningBinaryUpgrade (51.76s)

                                                
                                    
TestKubernetesUpgrade (208.39s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-868767 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-868767 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (43.819382833s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-868767
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-868767: (1.464721787s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-868767 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-868767 status --format={{.Host}}: exit status 7 (114.112812ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-868767 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-868767 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m58.261287162s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-868767 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-868767 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-868767 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (106.91756ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-868767] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-2509/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-2509/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-868767
	    minikube start -p kubernetes-upgrade-868767 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8687672 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-868767 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-868767 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-868767 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (42.281093346s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-868767" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-868767
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-868767: (2.185585464s)
--- PASS: TestKubernetesUpgrade (208.39s)
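The exit-106 path above is minikube refusing to move an existing v1.34.1 cluster back to v1.28.0. A minimal sketch of the kind of version guard that produces a K8S_DOWNGRADE_UNSUPPORTED-style rejection is shown below, using golang.org/x/mod/semver for the comparison; this is illustrative only and not minikube's actual implementation.

package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

// checkVersionChange rejects a downgrade of an existing cluster, roughly the
// behaviour behind the K8S_DOWNGRADE_UNSUPPORTED error above. Illustrative only.
func checkVersionChange(current, requested string) error {
	if semver.Compare(requested, current) < 0 {
		return fmt.Errorf("unable to safely downgrade existing Kubernetes %s cluster to %s", current, requested)
	}
	return nil
}

func main() {
	if err := checkVersionChange("v1.34.1", "v1.28.0"); err != nil {
		fmt.Println("X Exiting due to K8S_DOWNGRADE_UNSUPPORTED:", err)
	}
}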

                                                
                                    
TestMissingContainerUpgrade (121.71s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.1781448515 start -p missing-upgrade-155481 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.1781448515 start -p missing-upgrade-155481 --memory=3072 --driver=docker  --container-runtime=crio: (1m4.524020179s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-155481
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-155481
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-155481 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-155481 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (52.576269815s)
helpers_test.go:175: Cleaning up "missing-upgrade-155481" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-155481
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-155481: (2.804971625s)
--- PASS: TestMissingContainerUpgrade (121.71s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-201554 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-201554 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (99.359036ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-201554] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-2509/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-2509/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (45.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-201554 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-201554 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (44.576560284s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-201554 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (45.07s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (26.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-201554 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1018 18:12:03.656081    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/functional-306136/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-201554 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (23.600979021s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-201554 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-201554 status -o json: exit status 2 (298.814817ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-201554","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-201554
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-201554: (2.444059337s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (26.34s)
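The status check above parses `minikube status -o json`, whose shape is exactly the single JSON object shown (Host running, Kubelet and APIServer stopped). A minimal sketch of decoding that output follows, with a struct limited to those fields rather than minikube's own types; the profile name is taken from the log above.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileStatus mirrors the fields of `minikube status -o json` shown above.
type profileStatus struct {
	Name      string `json:"Name"`
	Host      string `json:"Host"`
	Kubelet   string `json:"Kubelet"`
	APIServer string `json:"APIServer"`
	Worker    bool   `json:"Worker"`
}

func main() {
	// `minikube status` exits non-zero when components are stopped, so keep
	// the captured stdout even if err != nil, as the test harness does.
	out, _ := exec.Command("out/minikube-linux-arm64", "-p", "NoKubernetes-201554", "status", "-o", "json").Output()
	var st profileStatus
	if err := json.Unmarshal(out, &st); err != nil {
		fmt.Println("decode:", err)
		return
	}
	fmt.Printf("host=%s kubelet=%s apiserver=%s\n", st.Host, st.Kubelet, st.APIServer)
}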

                                                
                                    
TestNoKubernetes/serial/Start (9.53s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-201554 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-201554 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (9.527326884s)
--- PASS: TestNoKubernetes/serial/Start (9.53s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-201554 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-201554 "sudo systemctl is-active --quiet service kubelet": exit status 1 (267.5201ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.7s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.70s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-201554
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-201554: (1.309691901s)
--- PASS: TestNoKubernetes/serial/Stop (1.31s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (8.25s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-201554 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-201554 --driver=docker  --container-runtime=crio: (8.245289941s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.25s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.45s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-201554 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-201554 "sudo systemctl is-active --quiet service kubelet": exit status 1 (446.450409ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.45s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.00s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (67.85s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.1176980599 start -p stopped-upgrade-021252 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.1176980599 start -p stopped-upgrade-021252 --memory=3072 --vm-driver=docker  --container-runtime=crio: (46.676655467s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.1176980599 -p stopped-upgrade-021252 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.1176980599 -p stopped-upgrade-021252 stop: (1.372969857s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-021252 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-021252 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (19.802192708s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (67.85s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.23s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-021252
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-021252: (1.22614507s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.23s)

                                                
                                    
TestPause/serial/Start (94.09s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-321903 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-321903 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m34.088897219s)
--- PASS: TestPause/serial/Start (94.09s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (28.64s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-321903 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-321903 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (28.605049642s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (28.64s)

                                                
                                    
TestNetworkPlugins/group/false (3.87s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-111074 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-111074 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (185.174366ms)

                                                
                                                
-- stdout --
	* [false-111074] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-2509/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-2509/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 18:16:47.017330  179970 out.go:360] Setting OutFile to fd 1 ...
	I1018 18:16:47.017454  179970 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 18:16:47.017462  179970 out.go:374] Setting ErrFile to fd 2...
	I1018 18:16:47.017467  179970 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 18:16:47.017851  179970 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-2509/.minikube/bin
	I1018 18:16:47.019191  179970 out.go:368] Setting JSON to false
	I1018 18:16:47.020064  179970 start.go:131] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7156,"bootTime":1760804251,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1018 18:16:47.020185  179970 start.go:141] virtualization:  
	I1018 18:16:47.023778  179970 out.go:179] * [false-111074] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 18:16:47.027868  179970 out.go:179]   - MINIKUBE_LOCATION=21409
	I1018 18:16:47.027979  179970 notify.go:220] Checking for updates...
	I1018 18:16:47.033844  179970 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 18:16:47.037100  179970 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-2509/kubeconfig
	I1018 18:16:47.040059  179970 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-2509/.minikube
	I1018 18:16:47.043048  179970 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 18:16:47.045970  179970 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 18:16:47.049526  179970 config.go:182] Loaded profile config "pause-321903": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 18:16:47.049664  179970 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 18:16:47.074527  179970 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 18:16:47.074707  179970 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 18:16:47.131497  179970 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-18 18:16:47.12228014 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 18:16:47.131600  179970 docker.go:318] overlay module found
	I1018 18:16:47.134768  179970 out.go:179] * Using the docker driver based on user configuration
	I1018 18:16:47.137686  179970 start.go:305] selected driver: docker
	I1018 18:16:47.137708  179970 start.go:925] validating driver "docker" against <nil>
	I1018 18:16:47.137722  179970 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 18:16:47.141393  179970 out.go:203] 
	W1018 18:16:47.144324  179970 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1018 18:16:47.147227  179970 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-111074 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-111074

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-111074

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-111074

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-111074

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-111074

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-111074

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-111074

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-111074

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-111074

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-111074

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-111074"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-111074"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-111074"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-111074

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-111074"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-111074"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-111074" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-111074" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-111074" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-111074" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-111074" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-111074" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-111074" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-111074" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-111074"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-111074"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-111074"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-111074"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-111074"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-111074" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-111074" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-111074" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-111074"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-111074"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-111074"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-111074"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-111074"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 18 Oct 2025 18:15:47 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-321903
contexts:
- context:
    cluster: pause-321903
    extensions:
    - extension:
        last-update: Sat, 18 Oct 2025 18:15:47 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-321903
  name: pause-321903
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-321903
  user:
    client-certificate: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/pause-321903/client.crt
    client-key: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/pause-321903/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-111074

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-111074"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-111074"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-111074"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-111074"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-111074"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-111074"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-111074"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-111074"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-111074"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-111074"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-111074"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-111074"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-111074"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-111074"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-111074"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-111074"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-111074"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-111074"

                                                
                                                
----------------------- debugLogs end: false-111074 [took: 3.50514202s] --------------------------------
helpers_test.go:175: Cleaning up "false-111074" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-111074
--- PASS: TestNetworkPlugins/group/false (3.87s)
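Every kubectl call in the debugLogs dump above fails with 'context "false-111074" does not exist' because the false-111074 profile was never started (it is cleaned up right after the dump), while the kubeconfig that gets printed only contains the pause-321903 context and has no current-context set. A minimal sketch of checking and switching contexts by hand; the context name comes from the dump above, nothing here was run by the test:

    kubectl config get-contexts              # lists pause-321903; false-111074 is absent
    kubectl config current-context           # reports that no current context is set, matching current-context: "" above
    kubectl config use-context pause-321903  # select the context that actually exists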

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (63.89s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-918475 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-918475 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (1m3.891171166s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (63.89s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (10.43s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-918475 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [d5268bf2-03ea-4390-b3f8-efc451427c93] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [d5268bf2-03ea-4390-b3f8-efc451427c93] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.004183638s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-918475 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.43s)
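The DeployApp step boils down to applying the test's own testdata/busybox.yaml, waiting for the busybox pod to report Ready, and reading its open-file limit with ulimit -n. Roughly the same check can be reproduced by hand; a sketch that uses kubectl wait instead of the test's internal polling, with the context and label taken from the output above:

    kubectl --context old-k8s-version-918475 create -f testdata/busybox.yaml
    kubectl --context old-k8s-version-918475 wait pod -l integration-test=busybox --for=condition=Ready --timeout=8m
    kubectl --context old-k8s-version-918475 exec busybox -- /bin/sh -c "ulimit -n"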

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (11.98s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-918475 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-918475 --alsologtostderr -v=3: (11.983547628s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.98s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-918475 -n old-k8s-version-918475
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-918475 -n old-k8s-version-918475: exit status 7 (81.388449ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-918475 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)
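The "status error: exit status 7 (may be ok)" line is expected here: minikube status exits non-zero when the host is stopped, and a stopped host is exactly the state this step wants before enabling the dashboard addon. A sketch of the same check from an interactive shell, quoting the Go template so the shell leaves it alone:

    out/minikube-linux-arm64 status --format='{{.Host}}' -p old-k8s-version-918475 -n old-k8s-version-918475   # prints Stopped
    echo $?   # 7 here, the value the test explicitly tolerates; 0 once the profile is running again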

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (52.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-918475 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
E1018 18:20:00.530024    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-918475 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (51.633518341s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-918475 -n old-k8s-version-918475
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (52.07s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-4dr8k" [7a344bb7-dbef-407e-a17b-95ee3212304e] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003845649s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-4dr8k" [7a344bb7-dbef-407e-a17b-95ee3212304e] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00389155s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-918475 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-918475 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)
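VerifyKubernetesImages dumps the profile's image list as JSON and flags anything outside the expected Kubernetes image set (the kindnetd and busybox entries above). The raw output can be inspected directly; a sketch, assuming jq is available for pretty-printing:

    out/minikube-linux-arm64 -p old-k8s-version-918475 image list --format=json | jq .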

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (86.84s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-192562 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-192562 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m26.8389716s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (86.84s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (84.62s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-213943 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1018 18:22:03.654776    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/functional-306136/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-213943 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m24.616026149s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (84.62s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.35s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-192562 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [3e78f628-7e13-41fa-9490-a4c4f9ae21c7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [3e78f628-7e13-41fa-9490-a4c4f9ae21c7] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.00398128s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-192562 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.35s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.04s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-192562 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-192562 --alsologtostderr -v=3: (12.036120912s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.04s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-192562 -n default-k8s-diff-port-192562
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-192562 -n default-k8s-diff-port-192562: exit status 7 (71.97154ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-192562 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (55.88s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-192562 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-192562 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (55.38583855s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-192562 -n default-k8s-diff-port-192562
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (55.88s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.37s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-213943 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [adef2fd3-de79-4e18-84a0-fe55d89ee37d] Pending
helpers_test.go:352: "busybox" [adef2fd3-de79-4e18-84a0-fe55d89ee37d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [adef2fd3-de79-4e18-84a0-fe55d89ee37d] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004053037s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-213943 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.37s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.78s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-213943 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-213943 --alsologtostderr -v=3: (12.777546159s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.78s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-213943 -n embed-certs-213943
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-213943 -n embed-certs-213943: exit status 7 (76.89409ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-213943 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (58.4s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-213943 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-213943 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (57.917669077s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-213943 -n embed-certs-213943
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (58.40s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-mq728" [c89d6f50-171d-4fdc-9aaf-1c535f5db829] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003100177s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-mq728" [c89d6f50-171d-4fdc-9aaf-1c535f5db829] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003867601s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-192562 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.20s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.54s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-192562 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.54s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (67.94s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-729957 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-729957 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m7.944579985s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (67.94s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-nmbd5" [0c6652bc-b6db-4827-91e8-190090a50541] Running
E1018 18:24:23.738362    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/old-k8s-version-918475/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 18:24:23.744737    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/old-k8s-version-918475/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 18:24:23.756402    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/old-k8s-version-918475/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 18:24:23.778556    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/old-k8s-version-918475/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 18:24:23.819921    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/old-k8s-version-918475/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 18:24:23.901254    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/old-k8s-version-918475/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 18:24:24.063246    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/old-k8s-version-918475/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 18:24:24.384836    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/old-k8s-version-918475/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 18:24:25.026680    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/old-k8s-version-918475/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 18:24:26.308004    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/old-k8s-version-918475/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 18:24:28.869503    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/old-k8s-version-918475/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00432419s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.13s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-nmbd5" [0c6652bc-b6db-4827-91e8-190090a50541] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003932353s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-213943 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
E1018 18:24:33.991735    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/old-k8s-version-918475/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.13s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.35s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-213943 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.35s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (40.28s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-530891 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1018 18:25:00.531114    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 18:25:04.716405    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/old-k8s-version-918475/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-530891 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (40.279728399s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (40.28s)
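This start passes --network-plugin=cni together with --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16, so kubeadm should carve each node's pod range out of 10.42.0.0/16. A quick way to confirm the setting took effect (a sketch; the jsonpath simply reads the first node's per-node CIDR, which will be a subnet of the /16 rather than the /16 itself):

    kubectl --context newest-cni-530891 get nodes -o jsonpath='{.items[0].spec.podCIDR}'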

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-729957 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [e89c4c23-75f1-45fd-a06e-77828509a4b3] Pending
helpers_test.go:352: "busybox" [e89c4c23-75f1-45fd-a06e-77828509a4b3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [e89c4c23-75f1-45fd-a06e-77828509a4b3] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.002873646s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-729957 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.38s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.6s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-530891 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-530891 --alsologtostderr -v=3: (1.600007415s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.60s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-530891 -n newest-cni-530891
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-530891 -n newest-cni-530891: exit status 7 (111.284443ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-530891 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.25s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (16.31s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-530891 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-530891 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (15.902521565s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-530891 -n newest-cni-530891
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (16.31s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.24s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-729957 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-729957 --alsologtostderr -v=3: (12.235288521s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.24s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.32s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-729957 -n no-preload-729957
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-729957 -n no-preload-729957: exit status 7 (131.293997ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-729957 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.32s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (52.67s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-729957 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1018 18:25:45.677702    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/old-k8s-version-918475/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-729957 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (52.169357044s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-729957 -n no-preload-729957
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (52.67s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-530891 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (87.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-111074 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E1018 18:26:23.615040    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-111074 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m27.111299427s)
--- PASS: TestNetworkPlugins/group/auto/Start (87.11s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-dq5cz" [c399e443-ef3f-4155-9f03-484901165b54] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003950689s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-dq5cz" [c399e443-ef3f-4155-9f03-484901165b54] Running
E1018 18:26:46.720914    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/functional-306136/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003182809s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-729957 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-729957 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (86.75s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-111074 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E1018 18:27:03.654893    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/functional-306136/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 18:27:07.599408    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/old-k8s-version-918475/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-111074 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m26.75079189s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (86.75s)
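The kindnet start passes --cni=kindnet, and the ControllerPod step later in this run waits for a pod labelled app=kindnet in kube-system before running connectivity checks. The same DaemonSet pods can be listed directly; a sketch using the label and namespace from that check:

    kubectl --context kindnet-111074 -n kube-system get pods -l app=kindnet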

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-111074 "pgrep -a kubelet"
I1018 18:27:24.989664    4320 config.go:182] Loaded profile config "auto-111074": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.34s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (12.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-111074 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-v24m2" [a4610e6f-e9b8-4d4b-996e-a3168be9ec69] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1018 18:27:29.096903    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/default-k8s-diff-port-192562/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 18:27:29.103425    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/default-k8s-diff-port-192562/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 18:27:29.114953    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/default-k8s-diff-port-192562/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 18:27:29.136537    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/default-k8s-diff-port-192562/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 18:27:29.178041    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/default-k8s-diff-port-192562/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 18:27:29.259590    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/default-k8s-diff-port-192562/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 18:27:29.421564    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/default-k8s-diff-port-192562/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 18:27:29.743292    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/default-k8s-diff-port-192562/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 18:27:30.385742    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/default-k8s-diff-port-192562/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-v24m2" [a4610e6f-e9b8-4d4b-996e-a3168be9ec69] Running
E1018 18:27:31.669007    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/default-k8s-diff-port-192562/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 18:27:34.230487    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/default-k8s-diff-port-192562/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.008195615s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.37s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-111074 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-111074 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

TestNetworkPlugins/group/auto/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-111074 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)
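Note: the three checks above (DNS, Localhost, HairPin) all follow the same pattern: exec into the netcat test deployment and probe a target, i.e. cluster DNS, the pod's own localhost port, or its service name for hairpin traffic. A minimal standalone sketch of that pattern in Go, assuming the auto-111074 context and its netcat deployment already exist; the probe helper below is illustrative and not the test suite's own code.

package main

import (
	"fmt"
	"os/exec"
)

// probe runs a command inside the netcat deployment of the given
// kubectl context and reports whether it succeeded.
func probe(context string, args ...string) error {
	base := []string{"--context", context, "exec", "deployment/netcat", "--"}
	out, err := exec.Command("kubectl", append(base, args...)...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%v: %s", err, out)
	}
	return nil
}

func main() {
	ctx := "auto-111074" // profile/context name taken from the run above

	checks := map[string][]string{
		"dns":       {"nslookup", "kubernetes.default"},
		"localhost": {"/bin/sh", "-c", "nc -w 5 -i 5 -z localhost 8080"},
		"hairpin":   {"/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080"},
	}
	for name, args := range checks {
		if err := probe(ctx, args...); err != nil {
			fmt.Printf("%s check failed: %v\n", name, err)
		} else {
			fmt.Printf("%s check passed\n", name)
		}
	}
}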

                                                
                                    
TestNetworkPlugins/group/calico/Start (62.08s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-111074 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E1018 18:28:10.077668    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/default-k8s-diff-port-192562/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-111074 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m2.07932094s)
--- PASS: TestNetworkPlugins/group/calico/Start (62.08s)
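Note: the Start steps reduce to invoking the minikube binary with a CNI selection and waiting for the cluster to come up, as the logged command shows. A rough sketch of driving the same invocation from Go; the binary path and profile name are copied from the log above, and the timing wrapper is only illustrative.

package main

import (
	"log"
	"os"
	"os/exec"
	"time"
)

func main() {
	profile := "calico-111074" // profile name taken from the run above
	cmd := exec.Command("out/minikube-linux-arm64", "start",
		"-p", profile,
		"--memory=3072",
		"--wait=true", "--wait-timeout=15m",
		"--cni=calico",
		"--driver=docker", "--container-runtime=crio",
	)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr

	start := time.Now()
	if err := cmd.Run(); err != nil {
		log.Fatalf("minikube start failed after %s: %v", time.Since(start), err)
	}
	log.Printf("cluster %s started in %s", profile, time.Since(start))
}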

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-d7vdr" [4bd45f5a-81b9-4780-a25d-ff84e06ff946] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004284154s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
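Note: the ControllerPod steps wait for every pod matching a label selector to report Running before a timeout. A sketch of an equivalent polling loop using kubectl's JSONPath output; the context, namespace, label, and timeout are taken from the log above, while the helper itself is only an approximation of the suite's own poller.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPods polls until every pod matching the label selector in the
// namespace reports phase Running, or the deadline expires.
func waitForPods(context, namespace, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", context,
			"-n", namespace, "get", "pods", "-l", selector,
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil {
			phases := strings.Fields(string(out))
			running := len(phases) > 0
			for _, p := range phases {
				if p != "Running" {
					running = false
				}
			}
			if running {
				return nil
			}
		}
		time.Sleep(3 * time.Second)
	}
	return fmt.Errorf("pods %q in %s not Running within %s", selector, namespace, timeout)
}

func main() {
	if err := waitForPods("kindnet-111074", "kube-system", "app=kindnet", 10*time.Minute); err != nil {
		fmt.Println(err)
	}
}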

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.39s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-111074 "pgrep -a kubelet"
I1018 18:28:30.783898    4320 config.go:182] Loaded profile config "kindnet-111074": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.39s)

TestNetworkPlugins/group/kindnet/NetCatPod (12.34s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-111074 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-mbp84" [78a4f041-ef67-4ffe-a605-5c44a42e2559] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-mbp84" [78a4f041-ef67-4ffe-a605-5c44a42e2559] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.003626785s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.34s)

TestNetworkPlugins/group/kindnet/DNS (0.22s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-111074 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.22s)

TestNetworkPlugins/group/kindnet/Localhost (0.23s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-111074 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.23s)

TestNetworkPlugins/group/kindnet/HairPin (0.22s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-111074 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.22s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-p6wwt" [1de9f649-cd78-42d4-9c38-49bca3e7daad] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-p6wwt" [1de9f649-cd78-42d4-9c38-49bca3e7daad] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005169621s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.35s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-111074 "pgrep -a kubelet"
I1018 18:29:07.317504    4320 config.go:182] Loaded profile config "calico-111074": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.35s)
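Note: the KubeletFlags steps simply ssh into the node and read the running kubelet command line with pgrep. A small sketch of the same check; the binary path and profile name come from the log, and the flag asserted on at the end is only an example of what one might grep for.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	profile := "calico-111074"
	out, err := exec.Command("out/minikube-linux-arm64", "ssh", "-p", profile,
		"pgrep -a kubelet").CombinedOutput()
	if err != nil {
		fmt.Printf("ssh failed: %v\n%s", err, out)
		return
	}
	cmdline := string(out)
	fmt.Println(cmdline)
	// Example assertion (assumption, not the suite's check): with the crio
	// runtime the kubelet is normally started with a runtime endpoint flag.
	if !strings.Contains(cmdline, "--container-runtime-endpoint") {
		fmt.Println("warning: expected --container-runtime-endpoint flag not found")
	}
}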

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.37s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-111074 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-6m8ht" [200fc3b2-cef3-438d-9d12-1400167b1ac1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-6m8ht" [200fc3b2-cef3-438d-9d12-1400167b1ac1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004213718s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.37s)

TestNetworkPlugins/group/custom-flannel/Start (65.59s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-111074 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-111074 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m5.591296499s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (65.59s)

TestNetworkPlugins/group/calico/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-111074 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.19s)

TestNetworkPlugins/group/calico/Localhost (0.19s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-111074 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.19s)

TestNetworkPlugins/group/calico/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-111074 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

TestNetworkPlugins/group/enable-default-cni/Start (75.91s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-111074 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E1018 18:29:51.441124    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/old-k8s-version-918475/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 18:30:00.530287    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/addons-164474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 18:30:12.961385    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/default-k8s-diff-port-192562/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-111074 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m15.909809732s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (75.91s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.46s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-111074 "pgrep -a kubelet"
I1018 18:30:14.394162    4320 config.go:182] Loaded profile config "custom-flannel-111074": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.46s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (12.44s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-111074 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-x8cmz" [95176527-322f-496c-85e3-41a8732b5fa1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1018 18:30:19.267778    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/no-preload-729957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 18:30:19.274099    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/no-preload-729957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 18:30:19.285405    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/no-preload-729957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 18:30:19.306672    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/no-preload-729957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 18:30:19.348012    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/no-preload-729957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 18:30:19.429245    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/no-preload-729957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 18:30:19.590665    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/no-preload-729957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 18:30:19.912679    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/no-preload-729957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 18:30:20.554635    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/no-preload-729957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-x8cmz" [95176527-322f-496c-85e3-41a8732b5fa1] Running
E1018 18:30:21.836391    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/no-preload-729957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 18:30:24.398570    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/no-preload-729957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.006638806s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.44s)

TestNetworkPlugins/group/custom-flannel/DNS (0.25s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-111074 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.2s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-111074 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.20s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.2s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-111074 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.20s)

TestNetworkPlugins/group/flannel/Start (65.02s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-111074 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E1018 18:31:00.250189    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/no-preload-729957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-111074 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m5.022508516s)
--- PASS: TestNetworkPlugins/group/flannel/Start (65.02s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.42s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-111074 "pgrep -a kubelet"
I1018 18:31:02.176487    4320 config.go:182] Loaded profile config "enable-default-cni-111074": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.42s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.39s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-111074 replace --force -f testdata/netcat-deployment.yaml
I1018 18:31:02.555465    4320 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-ntr2b" [345513e5-5ad6-4207-a3c5-a93858df5522] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-ntr2b" [345513e5-5ad6-4207-a3c5-a93858df5522] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.004082581s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.39s)
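Note: each NetCatPod step redeploys the netcat manifest with kubectl replace --force and then waits for the app=netcat pods to become ready. A condensed sketch of that deploy-and-wait sequence; the manifest path and context are taken from the log, and kubectl rollout status stands in here for the suite's own pod polling.

package main

import (
	"log"
	"os"
	"os/exec"
)

// run executes a kubectl command against the given context, streaming output.
func run(context string, args ...string) error {
	cmd := exec.Command("kubectl", append([]string{"--context", context}, args...)...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	ctx := "enable-default-cni-111074"

	// Recreate the test workload, as the test does with `replace --force`.
	if err := run(ctx, "replace", "--force", "-f", "testdata/netcat-deployment.yaml"); err != nil {
		log.Fatalf("deploy netcat: %v", err)
	}
	// Wait for the deployment to become available instead of polling pods directly.
	if err := run(ctx, "rollout", "status", "deployment/netcat", "--timeout=15m"); err != nil {
		log.Fatalf("netcat not ready: %v", err)
	}
}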

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.24s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-111074 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.24s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-111074 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-111074 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

TestNetworkPlugins/group/bridge/Start (80.99s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-111074 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E1018 18:31:41.211578    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/no-preload-729957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-111074 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m20.985408579s)
--- PASS: TestNetworkPlugins/group/bridge/Start (80.99s)

TestNetworkPlugins/group/flannel/ControllerPod (6.06s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-smswp" [c40308e4-2a02-48e4-87a2-8d39458a008c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.059532218s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.06s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.54s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-111074 "pgrep -a kubelet"
I1018 18:32:00.793552    4320 config.go:182] Loaded profile config "flannel-111074": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.54s)

TestNetworkPlugins/group/flannel/NetCatPod (12.34s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-111074 replace --force -f testdata/netcat-deployment.yaml
I1018 18:32:01.123299    4320 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-4c2pj" [3de1fc51-a65e-4aea-b7e4-31568f57e8e4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1018 18:32:03.655400    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/functional-306136/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-4c2pj" [3de1fc51-a65e-4aea-b7e4-31568f57e8e4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.004079185s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.34s)

TestNetworkPlugins/group/flannel/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-111074 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

TestNetworkPlugins/group/flannel/Localhost (0.17s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-111074 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

TestNetworkPlugins/group/flannel/HairPin (0.17s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-111074 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-111074 "pgrep -a kubelet"
I1018 18:32:59.767940    4320 config.go:182] Loaded profile config "bridge-111074": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

TestNetworkPlugins/group/bridge/NetCatPod (10.3s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-111074 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-2ccwf" [ce0e9206-a48f-4010-9371-111e135699bb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1018 18:33:03.133082    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/no-preload-729957/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-2ccwf" [ce0e9206-a48f-4010-9371-111e135699bb] Running
E1018 18:33:06.285387    4320 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/auto-111074/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003607322s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.30s)

TestNetworkPlugins/group/bridge/DNS (0.16s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-111074 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

TestNetworkPlugins/group/bridge/Localhost (0.18s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-111074 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.18s)

TestNetworkPlugins/group/bridge/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-111074 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

Test skip (31/327)

TestDownloadOnly/v1.28.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestDownloadOnlyKic (0.42s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-146837 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-146837" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-146837
--- SKIP: TestDownloadOnlyKic (0.42s)

TestOffline (0s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0.01s)
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.01s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.18s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-747178" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-747178
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)

TestNetworkPlugins/group/kubenet (3.88s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-111074 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-111074

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-111074

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-111074

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-111074

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-111074

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-111074

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-111074

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-111074

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-111074

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-111074

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-111074"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-111074"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-111074"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-111074

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-111074"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-111074"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-111074" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-111074" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-111074" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-111074" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-111074" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-111074" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-111074" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-111074" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-111074"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-111074"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-111074"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-111074"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-111074"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-111074" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-111074" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-111074" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-111074"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-111074"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-111074"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-111074"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-111074"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 18 Oct 2025 18:15:47 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-321903
contexts:
- context:
    cluster: pause-321903
    extensions:
    - extension:
        last-update: Sat, 18 Oct 2025 18:15:47 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-321903
  name: pause-321903
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-321903
  user:
    client-certificate: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/pause-321903/client.crt
    client-key: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/pause-321903/client.key
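
Note: the kubeconfig above contains only the pause-321903 context and its current-context is empty, which is why every probe in this section fails with either error: context "kubenet-111074" does not exist or the "Profile ... not found" hint. A minimal sketch of how one could confirm this by hand (standard kubectl/minikube commands; the profile name is taken from the log):

# list the contexts actually present in the kubeconfig shown above;
# only pause-321903 is expected to appear
kubectl config get-contexts

# list minikube profiles; kubenet-111074 was never created, which is why
# the host-side probes print the "Profile ... not found" hint
minikube profile list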

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-111074

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-111074"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-111074"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-111074"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-111074"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-111074"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-111074"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-111074"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-111074"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-111074"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-111074"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-111074"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-111074"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-111074"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-111074"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-111074"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-111074"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-111074"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-111074"

                                                
                                                
----------------------- debugLogs end: kubenet-111074 [took: 3.71952119s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-111074" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-111074
--- SKIP: TestNetworkPlugins/group/kubenet (3.88s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (6.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-111074 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-111074

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-111074

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-111074

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-111074

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-111074

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-111074

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-111074

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-111074

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-111074

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-111074

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-111074"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-111074"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-111074"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-111074

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-111074"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-111074"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-111074" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-111074" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-111074" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-111074" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-111074" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-111074" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-111074" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-111074" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-111074"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-111074"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-111074"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-111074"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-111074"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-111074

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-111074

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-111074" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-111074" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-111074

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-111074

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-111074" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-111074" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-111074" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-111074" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-111074" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-111074"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-111074"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-111074"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-111074"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-111074"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21409-2509/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 18 Oct 2025 18:16:54 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-321903
contexts:
- context:
    cluster: pause-321903
    extensions:
    - extension:
        last-update: Sat, 18 Oct 2025 18:16:54 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-321903
  name: pause-321903
current-context: pause-321903
kind: Config
preferences: {}
users:
- name: pause-321903
  user:
    client-certificate: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/pause-321903/client.crt
    client-key: /home/jenkins/minikube-integration/21409-2509/.minikube/profiles/pause-321903/client.key
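
Note: unlike the kubenet dump above, current-context here is pause-321903, yet the probes still fail because the collector addresses the cilium-111074 context explicitly. A sketch of the failing call shape (assumed; the exact kubectl invocation used by the collector is not shown in this log):

# targeting a context that is absent from the kubeconfig reproduces the
# "does not exist" error seen throughout this section
kubectl --context cilium-111074 get pods -A
# error: context "cilium-111074" does not exist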

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-111074

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-111074"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-111074"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-111074"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-111074"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-111074"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-111074"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-111074"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-111074"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-111074"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-111074"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-111074"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-111074"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-111074"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-111074"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-111074"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-111074"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-111074"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-111074" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-111074"

                                                
                                                
----------------------- debugLogs end: cilium-111074 [took: 5.927718599s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-111074" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-111074
--- SKIP: TestNetworkPlugins/group/cilium (6.16s)
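
If one wanted to exercise this skipped group directly, the usual Go subtest selector applies; a sketch only (the package path is an assumption about the repository layout, not taken from this report):

# run only the cilium network-plugin subtest; it should skip immediately
# with the reason recorded at net_test.go:102
go test ./test/integration -run 'TestNetworkPlugins/group/cilium' -v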

                                                
                                    